2026-03-01 00:00:07.090322 | Job console starting
2026-03-01 00:00:07.121103 | Updating git repos
2026-03-01 00:00:07.324425 | Cloning repos into workspace
2026-03-01 00:00:07.601520 | Restoring repo states
2026-03-01 00:00:07.628694 | Merging changes
2026-03-01 00:00:07.628719 | Checking out repos
2026-03-01 00:00:08.166167 | Preparing playbooks
2026-03-01 00:00:09.014944 | Running Ansible setup
2026-03-01 00:00:16.838855 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-01 00:00:18.637424 |
2026-03-01 00:00:18.639306 | PLAY [Base pre]
2026-03-01 00:00:18.697771 |
2026-03-01 00:00:18.697894 | TASK [Setup log path fact]
2026-03-01 00:00:18.728438 | orchestrator | ok
2026-03-01 00:00:18.764784 |
2026-03-01 00:00:18.765334 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-01 00:00:18.835776 | orchestrator | ok
2026-03-01 00:00:18.852372 |
2026-03-01 00:00:18.852473 | TASK [emit-job-header : Print job information]
2026-03-01 00:00:18.956215 | # Job Information
2026-03-01 00:00:18.956353 | Ansible Version: 2.16.14
2026-03-01 00:00:18.956383 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-01 00:00:18.956413 | Pipeline: periodic-midnight
2026-03-01 00:00:18.956460 | Executor: 521e9411259a
2026-03-01 00:00:18.956482 | Triggered by: https://github.com/osism/testbed
2026-03-01 00:00:18.956500 | Event ID: fedfdad55dd44eabbad10e545a96ba9f
2026-03-01 00:00:18.965242 |
2026-03-01 00:00:18.965342 | LOOP [emit-job-header : Print node information]
2026-03-01 00:00:19.220494 | orchestrator | ok:
2026-03-01 00:00:19.220701 | orchestrator | # Node Information
2026-03-01 00:00:19.220732 | orchestrator | Inventory Hostname: orchestrator
2026-03-01 00:00:19.220753 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-01 00:00:19.220771 | orchestrator | Username: zuul-testbed01
2026-03-01 00:00:19.220789 | orchestrator | Distro: Debian 12.13
2026-03-01 00:00:19.220808 | orchestrator | Provider: static-testbed
2026-03-01 00:00:19.220826 | orchestrator | Region:
2026-03-01 00:00:19.220844 | orchestrator | Label: testbed-orchestrator
2026-03-01 00:00:19.220873 | orchestrator | Product Name: OpenStack Nova
2026-03-01 00:00:19.220891 | orchestrator | Interface IP: 81.163.193.140
2026-03-01 00:00:19.245890 |
2026-03-01 00:00:19.245991 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-01 00:00:20.260649 | orchestrator -> localhost | changed
2026-03-01 00:00:20.268802 |
2026-03-01 00:00:20.268919 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-01 00:00:23.658397 | orchestrator -> localhost | changed
2026-03-01 00:00:23.682802 |
2026-03-01 00:00:23.682943 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-01 00:00:24.561278 | orchestrator -> localhost | ok
2026-03-01 00:00:24.567121 |
2026-03-01 00:00:24.567218 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-01 00:00:24.595472 | orchestrator | ok
2026-03-01 00:00:24.620296 | orchestrator | included: /var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-01 00:00:24.639089 |
2026-03-01 00:00:24.639194 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-01 00:00:28.513291 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-01 00:00:28.513463 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/0a59ffbc013c4e8c87d4525fe06cbd45_id_rsa
2026-03-01 00:00:28.513495 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/0a59ffbc013c4e8c87d4525fe06cbd45_id_rsa.pub
2026-03-01 00:00:28.513518 | orchestrator -> localhost | The key fingerprint is:
2026-03-01 00:00:28.513541 | orchestrator -> localhost | SHA256:ywKRuFSysrrQZuGPpSfcyORCcEe/xK0HX30xzkdM0x8 zuul-build-sshkey
2026-03-01 00:00:28.513559 | orchestrator -> localhost | The key's randomart image is:
2026-03-01 00:00:28.513587 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-01 00:00:28.513605 | orchestrator -> localhost | | . . +o|
2026-03-01 00:00:28.513624 | orchestrator -> localhost | | =.. oE=|
2026-03-01 00:00:28.513641 | orchestrator -> localhost | |. +.oo . . o +o|
2026-03-01 00:00:28.513658 | orchestrator -> localhost | |.+....= . . . + o|
2026-03-01 00:00:28.513675 | orchestrator -> localhost | |o.o... =S. . . |
2026-03-01 00:00:28.513695 | orchestrator -> localhost | |.+.. .o.o. |
2026-03-01 00:00:28.513712 | orchestrator -> localhost | |+==o. ..o |
2026-03-01 00:00:28.513728 | orchestrator -> localhost | |o+*=o . |
2026-03-01 00:00:28.513746 | orchestrator -> localhost | |..oo. |
2026-03-01 00:00:28.513762 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-01 00:00:28.513818 | orchestrator -> localhost | ok: Runtime: 0:00:02.447479
2026-03-01 00:00:28.519972 |
2026-03-01 00:00:28.520056 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-01 00:00:28.550218 | orchestrator | ok
2026-03-01 00:00:28.592101 | orchestrator | included: /var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-01 00:00:28.608897 |
2026-03-01 00:00:28.608993 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-01 00:00:28.631855 | orchestrator | skipping: Conditional result was False
2026-03-01 00:00:28.639461 |
2026-03-01 00:00:28.639558 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-01 00:00:29.295720 | orchestrator | changed
2026-03-01 00:00:29.311007 |
2026-03-01 00:00:29.311109 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-01 00:00:29.633511 | orchestrator | ok
2026-03-01 00:00:29.638672 |
2026-03-01 00:00:29.638757 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-01 00:00:30.198569 | orchestrator | ok
2026-03-01 00:00:30.221265 |
2026-03-01 00:00:30.221459 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-01 00:00:30.711941 | orchestrator | ok
2026-03-01 00:00:30.716825 |
2026-03-01 00:00:30.716925 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-01 00:00:30.753208 | orchestrator | skipping: Conditional result was False
2026-03-01 00:00:30.759745 |
2026-03-01 00:00:30.759843 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-01 00:00:32.056345 | orchestrator -> localhost | changed
2026-03-01 00:00:32.084854 |
2026-03-01 00:00:32.084985 | TASK [add-build-sshkey : Add back temp key]
2026-03-01 00:00:33.034910 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/0a59ffbc013c4e8c87d4525fe06cbd45_id_rsa (zuul-build-sshkey)
2026-03-01 00:00:33.035099 | orchestrator -> localhost | ok: Runtime: 0:00:00.018930
2026-03-01 00:00:33.041061 |
2026-03-01 00:00:33.041148 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-01 00:00:33.690064 | orchestrator | ok
2026-03-01 00:00:33.708850 |
2026-03-01 00:00:33.709045 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-01 00:00:33.732526 | orchestrator | skipping: Conditional result was False
2026-03-01 00:00:33.838734 |
2026-03-01 00:00:33.838923 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-01 00:00:34.559291 | orchestrator | ok
2026-03-01 00:00:34.595486 |
2026-03-01 00:00:34.595599 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-01 00:00:34.655064 | orchestrator | ok
2026-03-01 00:00:34.680595 |
2026-03-01 00:00:34.680709 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-01 00:00:35.751436 | orchestrator -> localhost | ok
2026-03-01 00:00:35.758795 |
2026-03-01 00:00:35.758917 | TASK [validate-host : Collect information about the host]
2026-03-01 00:00:37.866360 | orchestrator | ok
2026-03-01 00:00:37.891438 |
2026-03-01 00:00:37.891555 | TASK [validate-host : Sanitize hostname]
2026-03-01 00:00:37.982397 | orchestrator | ok
2026-03-01 00:00:37.987521 |
2026-03-01 00:00:37.987612 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-01 00:00:39.436758 | orchestrator -> localhost | changed
2026-03-01 00:00:39.441960 |
2026-03-01 00:00:39.442045 | TASK [validate-host : Collect information about zuul worker]
2026-03-01 00:00:40.158026 | orchestrator | ok
2026-03-01 00:00:40.170758 |
2026-03-01 00:00:40.170903 | TASK [validate-host : Write out all zuul information for each host]
2026-03-01 00:00:41.433445 | orchestrator -> localhost | changed
2026-03-01 00:00:41.443751 |
2026-03-01 00:00:41.443842 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-01 00:00:41.760905 | orchestrator | ok
2026-03-01 00:00:41.766432 |
2026-03-01 00:00:41.766520 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-01 00:02:11.619124 | orchestrator | changed:
2026-03-01 00:02:11.620868 | orchestrator | .d..t...... src/
2026-03-01 00:02:11.620992 | orchestrator | .d..t...... src/github.com/
2026-03-01 00:02:11.621021 | orchestrator | .d..t...... src/github.com/osism/
2026-03-01 00:02:11.621045 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-01 00:02:11.621067 | orchestrator | RedHat.yml
2026-03-01 00:02:11.638694 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-01 00:02:11.638712 | orchestrator | RedHat.yml
2026-03-01 00:02:11.638764 | orchestrator | = 1.53.0"...
2026-03-01 00:02:23.072415 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-01 00:02:23.088750 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-01 00:02:23.530715 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-01 00:02:24.490086 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-01 00:02:24.550211 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-01 00:02:25.280012 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-01 00:02:25.341771 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-01 00:02:25.836663 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-01 00:02:25.836756 | orchestrator |
2026-03-01 00:02:25.836774 | orchestrator | Providers are signed by their developers.
2026-03-01 00:02:25.836788 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-01 00:02:25.836801 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-01 00:02:25.836834 | orchestrator |
2026-03-01 00:02:25.836846 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-01 00:02:25.836854 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-01 00:02:25.836877 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-01 00:02:25.836885 | orchestrator | you run "tofu init" in the future.
2026-03-01 00:02:25.837122 | orchestrator |
2026-03-01 00:02:25.837136 | orchestrator | OpenTofu has been successfully initialized!
2026-03-01 00:02:25.837153 | orchestrator |
2026-03-01 00:02:25.837161 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-01 00:02:25.837169 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-01 00:02:25.837177 | orchestrator | should now work.
2026-03-01 00:02:25.837184 | orchestrator |
2026-03-01 00:02:25.837192 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-01 00:02:25.837199 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-01 00:02:25.837207 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-01 00:02:26.016305 | orchestrator | Created and switched to workspace "ci"!
2026-03-01 00:02:26.016379 | orchestrator |
2026-03-01 00:02:26.016393 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-01 00:02:26.016405 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-01 00:02:26.016415 | orchestrator | for this configuration.
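The provider resolution above is consistent with a `required_providers` block roughly like the following. This is a sketch, not the testbed's actual configuration: the `>= 2.2.0` constraint for `hashicorp/local` and the absence of a constraint for `hashicorp/null` are taken from the log, while the constraint for the OpenStack provider is only partially visible (`= 1.53.0"...`) and is therefore left as a comment.

```hcl
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # constraint truncated in the log; resolved to v3.4.0
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches "Finding hashicorp/local versions matching ..."
    }
    null = {
      source = "hashicorp/null" # no constraint: "Finding latest version of hashicorp/null..."
    }
  }
}
```

Because `tofu init` pins the resolved versions in `.terraform.lock.hcl`, committing that lock file (as the log output recommends) keeps later runs on v3.4.0/v2.7.0/v3.2.4 until the lock is updated.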
2026-03-01 00:02:26.197999 | orchestrator | ci.auto.tfvars
2026-03-01 00:02:26.642431 | orchestrator | default_custom.tf
2026-03-01 00:02:31.376980 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-01 00:02:32.116838 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-01 00:02:32.355982 | orchestrator |
2026-03-01 00:02:32.356068 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-01 00:02:32.356081 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-01 00:02:32.356090 | orchestrator |   + create
2026-03-01 00:02:32.356098 | orchestrator |  <= read (data resources)
2026-03-01 00:02:32.356106 | orchestrator |
2026-03-01 00:02:32.356114 | orchestrator | OpenTofu will perform the following actions:
2026-03-01 00:02:32.356131 | orchestrator |
2026-03-01 00:02:32.356139 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-01 00:02:32.356147 | orchestrator |   # (config refers to values not yet known)
2026-03-01 00:02:32.356154 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-01 00:02:32.356162 | orchestrator |       + checksum    = (known after apply)
2026-03-01 00:02:32.356170 | orchestrator |       + created_at  = (known after apply)
2026-03-01 00:02:32.356177 | orchestrator |       + file        = (known after apply)
2026-03-01 00:02:32.356185 | orchestrator |       + id          = (known after apply)
2026-03-01 00:02:32.356215 | orchestrator |       + metadata    = (known after apply)
2026-03-01 00:02:32.356223 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-01 00:02:32.356230 | orchestrator |       + min_ram_mb  = (known after apply)
2026-03-01 00:02:32.356238 | orchestrator |       + most_recent = true
2026-03-01 00:02:32.356246 | orchestrator |       + name        = (known after apply)
2026-03-01 00:02:32.356253 | orchestrator |       + protected   = (known after apply)
2026-03-01 00:02:32.356260 | orchestrator |       + region      = (known after apply)
2026-03-01 00:02:32.356271 | orchestrator |       + schema      = (known after apply)
2026-03-01 00:02:32.356279 | orchestrator |       + size_bytes  = (known after apply)
2026-03-01 00:02:32.356286 | orchestrator |       + tags        = (known after apply)
2026-03-01 00:02:32.356293 | orchestrator |       + updated_at  = (known after apply)
2026-03-01 00:02:32.356301 | orchestrator |     }
2026-03-01 00:02:32.356318 | orchestrator |
2026-03-01 00:02:32.356326 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-01 00:02:32.356334 | orchestrator |   # (config refers to values not yet known)
2026-03-01 00:02:32.356341 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-01 00:02:32.356349 | orchestrator |       + checksum    = (known after apply)
2026-03-01 00:02:32.356356 | orchestrator |       + created_at  = (known after apply)
2026-03-01 00:02:32.356363 | orchestrator |       + file        = (known after apply)
2026-03-01 00:02:32.356370 | orchestrator |       + id          = (known after apply)
2026-03-01 00:02:32.356377 | orchestrator |       + metadata    = (known after apply)
2026-03-01 00:02:32.356384 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-01 00:02:32.356392 | orchestrator |       + min_ram_mb  = (known after apply)
2026-03-01 00:02:32.356399 | orchestrator |       + most_recent = true
2026-03-01 00:02:32.356406 | orchestrator |       + name        = (known after apply)
2026-03-01 00:02:32.356413 | orchestrator |       + protected   = (known after apply)
2026-03-01 00:02:32.356421 | orchestrator |       + region      = (known after apply)
2026-03-01 00:02:32.356428 | orchestrator |       + schema      = (known after apply)
2026-03-01 00:02:32.356435 | orchestrator |       + size_bytes  = (known after apply)
2026-03-01 00:02:32.356469 | orchestrator |       + tags        = (known after apply)
2026-03-01 00:02:32.356476 | orchestrator |       + updated_at  = (known after apply)
2026-03-01 00:02:32.356483 | orchestrator |     }
2026-03-01 00:02:32.356491 | orchestrator |
2026-03-01 00:02:32.356498 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-01 00:02:32.356506 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-01 00:02:32.356514 | orchestrator |       + content              = (known after apply)
2026-03-01 00:02:32.356522 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-01 00:02:32.356529 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-01 00:02:32.356536 | orchestrator |       + content_md5          = (known after apply)
2026-03-01 00:02:32.356544 | orchestrator |       + content_sha1         = (known after apply)
2026-03-01 00:02:32.356551 | orchestrator |       + content_sha256       = (known after apply)
2026-03-01 00:02:32.356558 | orchestrator |       + content_sha512       = (known after apply)
2026-03-01 00:02:32.356565 | orchestrator |       + directory_permission = "0777"
2026-03-01 00:02:32.356573 | orchestrator |       + file_permission      = "0644"
2026-03-01 00:02:32.356580 | orchestrator |       + filename             = ".MANAGER_ADDRESS.ci"
2026-03-01 00:02:32.356587 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.356594 | orchestrator |     }
2026-03-01 00:02:32.356605 | orchestrator |
2026-03-01 00:02:32.356612 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-01 00:02:32.356620 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-01 00:02:32.356627 | orchestrator |       + content              = (known after apply)
2026-03-01 00:02:32.356634 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-01 00:02:32.356642 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-01 00:02:32.356649 | orchestrator |       + content_md5          = (known after apply)
2026-03-01 00:02:32.356656 | orchestrator |       + content_sha1         = (known after apply)
2026-03-01 00:02:32.356663 | orchestrator |       + content_sha256       = (known after apply)
2026-03-01 00:02:32.356670 | orchestrator |       + content_sha512       = (known after apply)
2026-03-01 00:02:32.356677 | orchestrator |       + directory_permission = "0777"
2026-03-01 00:02:32.356685 | orchestrator |       + file_permission      = "0644"
2026-03-01 00:02:32.356698 | orchestrator |       + filename             = ".id_rsa.ci.pub"
2026-03-01 00:02:32.356705 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.356713 | orchestrator |     }
2026-03-01 00:02:32.356720 | orchestrator |
2026-03-01 00:02:32.356734 | orchestrator |   # local_file.inventory will be created
2026-03-01 00:02:32.356741 | orchestrator |   + resource "local_file" "inventory" {
2026-03-01 00:02:32.356749 | orchestrator |       + content              = (known after apply)
2026-03-01 00:02:32.356756 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-01 00:02:32.356764 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-01 00:02:32.356771 | orchestrator |       + content_md5          = (known after apply)
2026-03-01 00:02:32.356778 | orchestrator |       + content_sha1         = (known after apply)
2026-03-01 00:02:32.356786 | orchestrator |       + content_sha256       = (known after apply)
2026-03-01 00:02:32.356793 | orchestrator |       + content_sha512       = (known after apply)
2026-03-01 00:02:32.356800 | orchestrator |       + directory_permission = "0777"
2026-03-01 00:02:32.356808 | orchestrator |       + file_permission      = "0644"
2026-03-01 00:02:32.356815 | orchestrator |       + filename             = "inventory.ci"
2026-03-01 00:02:32.356822 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.356830 | orchestrator |     }
2026-03-01 00:02:32.356837 | orchestrator |
2026-03-01 00:02:32.356845 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-01 00:02:32.356852 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-01 00:02:32.356859 | orchestrator |       + content              = (sensitive value)
2026-03-01 00:02:32.356867 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-01 00:02:32.356874 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-01 00:02:32.356881 | orchestrator |       + content_md5          = (known after apply)
2026-03-01 00:02:32.356889 | orchestrator |       + content_sha1         = (known after apply)
2026-03-01 00:02:32.356896 | orchestrator |       + content_sha256       = (known after apply)
2026-03-01 00:02:32.356903 | orchestrator |       + content_sha512       = (known after apply)
2026-03-01 00:02:32.356911 | orchestrator |       + directory_permission = "0700"
2026-03-01 00:02:32.356918 | orchestrator |       + file_permission      = "0600"
2026-03-01 00:02:32.356925 | orchestrator |       + filename             = ".id_rsa.ci"
2026-03-01 00:02:32.356932 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.356940 | orchestrator |     }
2026-03-01 00:02:32.356947 | orchestrator |
2026-03-01 00:02:32.356954 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-01 00:02:32.356961 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-01 00:02:32.356969 | orchestrator |       + id = (known after apply)
2026-03-01 00:02:32.356976 | orchestrator |     }
2026-03-01 00:02:32.356986 | orchestrator |
2026-03-01 00:02:32.356994 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-01 00:02:32.357001 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-01 00:02:32.357009 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357016 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357023 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357030 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357038 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357045 | orchestrator |       + name                 = "testbed-volume-manager-base"
2026-03-01 00:02:32.357052 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357059 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357067 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357074 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357081 | orchestrator |     }
2026-03-01 00:02:32.357088 | orchestrator |
2026-03-01 00:02:32.357096 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-01 00:02:32.357103 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357110 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357118 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357125 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357139 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357146 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357153 | orchestrator |       + name                 = "testbed-volume-0-node-base"
2026-03-01 00:02:32.357161 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357168 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357175 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357182 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357190 | orchestrator |     }
2026-03-01 00:02:32.357197 | orchestrator |
2026-03-01 00:02:32.357204 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-01 00:02:32.357211 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357219 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357226 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357233 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357240 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357248 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357255 | orchestrator |       + name                 = "testbed-volume-1-node-base"
2026-03-01 00:02:32.357262 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357269 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357276 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357284 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357291 | orchestrator |     }
2026-03-01 00:02:32.357298 | orchestrator |
2026-03-01 00:02:32.357306 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-01 00:02:32.357313 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357320 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357328 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357335 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357342 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357349 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357357 | orchestrator |       + name                 = "testbed-volume-2-node-base"
2026-03-01 00:02:32.357364 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357371 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357379 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357386 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357393 | orchestrator |     }
2026-03-01 00:02:32.357403 | orchestrator |
2026-03-01 00:02:32.357410 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-01 00:02:32.357418 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357425 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357432 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357452 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357459 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357467 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357478 | orchestrator |       + name                 = "testbed-volume-3-node-base"
2026-03-01 00:02:32.357485 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357493 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357500 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357507 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357514 | orchestrator |     }
2026-03-01 00:02:32.357522 | orchestrator |
2026-03-01 00:02:32.357529 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-01 00:02:32.357536 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357543 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357551 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357558 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357571 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357579 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357586 | orchestrator |       + name                 = "testbed-volume-4-node-base"
2026-03-01 00:02:32.357593 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357600 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357608 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357615 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357622 | orchestrator |     }
2026-03-01 00:02:32.357629 | orchestrator |
2026-03-01 00:02:32.357637 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-01 00:02:32.357644 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-01 00:02:32.357651 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357658 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357666 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357673 | orchestrator |       + image_id             = (known after apply)
2026-03-01 00:02:32.357680 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357688 | orchestrator |       + name                 = "testbed-volume-5-node-base"
2026-03-01 00:02:32.357695 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357702 | orchestrator |       + size                 = 80
2026-03-01 00:02:32.357709 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357716 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357724 | orchestrator |     }
2026-03-01 00:02:32.357731 | orchestrator |
2026-03-01 00:02:32.357738 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-01 00:02:32.357746 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.357754 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357761 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357768 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357775 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357782 | orchestrator |       + name                 = "testbed-volume-0-node-3"
2026-03-01 00:02:32.357790 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357797 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.357804 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357812 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357819 | orchestrator |     }
2026-03-01 00:02:32.357826 | orchestrator |
2026-03-01 00:02:32.357834 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-01 00:02:32.357841 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.357848 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357855 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357863 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357870 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357877 | orchestrator |       + name                 = "testbed-volume-1-node-4"
2026-03-01 00:02:32.357884 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357892 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.357899 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.357906 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.357913 | orchestrator |     }
2026-03-01 00:02:32.357924 | orchestrator |
2026-03-01 00:02:32.357931 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-01 00:02:32.357938 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.357946 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.357953 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.357960 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.357967 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.357974 | orchestrator |       + name                 = "testbed-volume-2-node-5"
2026-03-01 00:02:32.357981 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.357994 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358002 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358009 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358040 | orchestrator |     }
2026-03-01 00:02:32.358047 | orchestrator |
2026-03-01 00:02:32.358054 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-01 00:02:32.358062 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.358069 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.358076 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.358083 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.358091 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.358098 | orchestrator |       + name                 = "testbed-volume-3-node-3"
2026-03-01 00:02:32.358105 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.358113 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358120 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358127 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358135 | orchestrator |     }
2026-03-01 00:02:32.358142 | orchestrator |
2026-03-01 00:02:32.358149 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-01 00:02:32.358157 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.358164 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.358171 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.358182 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.358195 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.358208 | orchestrator |       + name                 = "testbed-volume-4-node-4"
2026-03-01 00:02:32.358217 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.358228 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358236 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358243 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358250 | orchestrator |     }
2026-03-01 00:02:32.358257 | orchestrator |
2026-03-01 00:02:32.358264 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-01 00:02:32.358271 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.358279 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.358286 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.358293 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.358300 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.358307 | orchestrator |       + name                 = "testbed-volume-5-node-5"
2026-03-01 00:02:32.358314 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.358321 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358329 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358336 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358343 | orchestrator |     }
2026-03-01 00:02:32.358350 | orchestrator |
2026-03-01 00:02:32.358357 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-01 00:02:32.358364 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.358372 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.358379 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.358386 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.358393 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.358400 | orchestrator |       + name                 = "testbed-volume-6-node-3"
2026-03-01 00:02:32.358407 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.358415 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358422 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358429 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358453 | orchestrator |     }
2026-03-01 00:02:32.358465 | orchestrator |
2026-03-01 00:02:32.358477 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-01 00:02:32.358489 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-01 00:02:32.358515 | orchestrator |       + attachment           = (known after apply)
2026-03-01 00:02:32.358527 | orchestrator |       + availability_zone    = "nova"
2026-03-01 00:02:32.358538 | orchestrator |       + id                   = (known after apply)
2026-03-01 00:02:32.358545 | orchestrator |       + metadata             = (known after apply)
2026-03-01 00:02:32.358553 | orchestrator |       + name                 = "testbed-volume-7-node-4"
2026-03-01 00:02:32.358560 | orchestrator |       + region               = (known after apply)
2026-03-01 00:02:32.358567 | orchestrator |       + size                 = 20
2026-03-01 00:02:32.358574 | orchestrator |       + volume_retype_policy = "never"
2026-03-01 00:02:32.358582 | orchestrator |       + volume_type          = "ssd"
2026-03-01 00:02:32.358589 | orchestrator |     }
2026-03-01 00:02:32.358596 | orchestrator |
2026-03-01 00:02:32.358603 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-01 00:02:32.358610 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-01 00:02:32.358618 | orchestrator | + attachment = (known after apply) 2026-03-01 00:02:32.358625 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.358632 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.358639 | orchestrator | + metadata = (known after apply) 2026-03-01 00:02:32.358647 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-01 00:02:32.358654 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.358661 | orchestrator | + size = 20 2026-03-01 00:02:32.358668 | orchestrator | + volume_retype_policy = "never" 2026-03-01 00:02:32.358675 | orchestrator | + volume_type = "ssd" 2026-03-01 00:02:32.358683 | orchestrator | } 2026-03-01 00:02:32.358694 | orchestrator | 2026-03-01 00:02:32.358702 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-01 00:02:32.358709 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-01 00:02:32.358716 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.358723 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.358731 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.358738 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.358745 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.358752 | orchestrator | + config_drive = true 2026-03-01 00:02:32.358759 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.358766 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.358774 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-01 00:02:32.358781 | orchestrator | + force_delete = false 2026-03-01 00:02:32.358788 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.358795 | 
orchestrator | + id = (known after apply) 2026-03-01 00:02:32.358802 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.358809 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.358816 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.358823 | orchestrator | + name = "testbed-manager" 2026-03-01 00:02:32.358831 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.358838 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.358845 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.358852 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.358859 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.358866 | orchestrator | + user_data = (sensitive value) 2026-03-01 00:02:32.358873 | orchestrator | 2026-03-01 00:02:32.358881 | orchestrator | + block_device { 2026-03-01 00:02:32.358888 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.358895 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.358906 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.358913 | orchestrator | + multiattach = false 2026-03-01 00:02:32.358921 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.358928 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.358940 | orchestrator | } 2026-03-01 00:02:32.358948 | orchestrator | 2026-03-01 00:02:32.358955 | orchestrator | + network { 2026-03-01 00:02:32.358962 | orchestrator | + access_network = false 2026-03-01 00:02:32.358969 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.358976 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.358984 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.358991 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.358998 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.359005 | orchestrator | + uuid = (known after apply) 2026-03-01 
00:02:32.359012 | orchestrator | } 2026-03-01 00:02:32.359020 | orchestrator | } 2026-03-01 00:02:32.359027 | orchestrator | 2026-03-01 00:02:32.359034 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-01 00:02:32.359041 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.359049 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.359056 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.359063 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.359070 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.359077 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.359084 | orchestrator | + config_drive = true 2026-03-01 00:02:32.359092 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.359099 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.359106 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.359113 | orchestrator | + force_delete = false 2026-03-01 00:02:32.359120 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.359127 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.359135 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.359142 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.359149 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.359156 | orchestrator | + name = "testbed-node-0" 2026-03-01 00:02:32.359163 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.359170 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.359177 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.359184 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.359192 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.359199 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.359206 | orchestrator | 2026-03-01 00:02:32.359213 | orchestrator | + block_device { 2026-03-01 00:02:32.359221 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.359228 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.359235 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.359242 | orchestrator | + multiattach = false 2026-03-01 00:02:32.359249 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.359256 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.359264 | orchestrator | } 2026-03-01 00:02:32.359271 | orchestrator | 2026-03-01 00:02:32.359278 | orchestrator | + network { 2026-03-01 00:02:32.359285 | orchestrator | + access_network = false 2026-03-01 00:02:32.359292 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.359300 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.359307 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.359314 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.359321 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.359328 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.359335 | orchestrator | } 2026-03-01 00:02:32.359343 | orchestrator | } 2026-03-01 00:02:32.359353 | orchestrator | 2026-03-01 00:02:32.359361 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-01 00:02:32.359368 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.359375 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.359387 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.359394 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.359401 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.359409 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.359416 
| orchestrator | + config_drive = true 2026-03-01 00:02:32.359423 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.359430 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.359458 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.359466 | orchestrator | + force_delete = false 2026-03-01 00:02:32.359473 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.359480 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.359488 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.359495 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.359502 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.359509 | orchestrator | + name = "testbed-node-1" 2026-03-01 00:02:32.359517 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.359524 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.359531 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.359538 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.359546 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.359553 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.359560 | orchestrator | 2026-03-01 00:02:32.359568 | orchestrator | + block_device { 2026-03-01 00:02:32.359575 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.359582 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.359590 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.359597 | orchestrator | + multiattach = false 2026-03-01 00:02:32.359604 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.359611 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.359619 | orchestrator | } 2026-03-01 00:02:32.359626 | orchestrator | 2026-03-01 00:02:32.359633 | orchestrator | + network { 2026-03-01 00:02:32.359641 | orchestrator | + access_network = 
false 2026-03-01 00:02:32.359648 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.359655 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.359663 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.359670 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.359677 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.359684 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.359692 | orchestrator | } 2026-03-01 00:02:32.359699 | orchestrator | } 2026-03-01 00:02:32.359706 | orchestrator | 2026-03-01 00:02:32.359714 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-01 00:02:32.359721 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.359728 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.359736 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.359743 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.359751 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.359762 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.359769 | orchestrator | + config_drive = true 2026-03-01 00:02:32.359777 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.359784 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.359791 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.359798 | orchestrator | + force_delete = false 2026-03-01 00:02:32.359806 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.359813 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.359820 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.359833 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.359840 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.359847 | orchestrator | + name = 
"testbed-node-2" 2026-03-01 00:02:32.359855 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.359862 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.359869 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.359876 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.359883 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.359890 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.359898 | orchestrator | 2026-03-01 00:02:32.359905 | orchestrator | + block_device { 2026-03-01 00:02:32.359912 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.359919 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.359927 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.359934 | orchestrator | + multiattach = false 2026-03-01 00:02:32.359941 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.359948 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.359955 | orchestrator | } 2026-03-01 00:02:32.359963 | orchestrator | 2026-03-01 00:02:32.359970 | orchestrator | + network { 2026-03-01 00:02:32.359977 | orchestrator | + access_network = false 2026-03-01 00:02:32.359984 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.359992 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.359999 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.360006 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.360013 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.360020 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360028 | orchestrator | } 2026-03-01 00:02:32.360035 | orchestrator | } 2026-03-01 00:02:32.360046 | orchestrator | 2026-03-01 00:02:32.360054 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-01 00:02:32.360061 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.360068 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.360076 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.360083 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.360090 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.360097 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.360104 | orchestrator | + config_drive = true 2026-03-01 00:02:32.360111 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.360118 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.360126 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.360133 | orchestrator | + force_delete = false 2026-03-01 00:02:32.360140 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.360147 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.360154 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.360162 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.360169 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.360176 | orchestrator | + name = "testbed-node-3" 2026-03-01 00:02:32.360183 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.360190 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.360198 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.360205 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.360212 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.360220 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.360227 | orchestrator | 2026-03-01 00:02:32.360234 | orchestrator | + block_device { 2026-03-01 00:02:32.360245 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.360252 | orchestrator | + delete_on_termination = false 2026-03-01 
00:02:32.360260 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.360271 | orchestrator | + multiattach = false 2026-03-01 00:02:32.360279 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.360286 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360293 | orchestrator | } 2026-03-01 00:02:32.360300 | orchestrator | 2026-03-01 00:02:32.360307 | orchestrator | + network { 2026-03-01 00:02:32.360315 | orchestrator | + access_network = false 2026-03-01 00:02:32.360322 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.360329 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.360336 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.360343 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.360350 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.360358 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360365 | orchestrator | } 2026-03-01 00:02:32.360372 | orchestrator | } 2026-03-01 00:02:32.360379 | orchestrator | 2026-03-01 00:02:32.360387 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-01 00:02:32.360394 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.360401 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.360409 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.360416 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.360423 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.360430 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.360450 | orchestrator | + config_drive = true 2026-03-01 00:02:32.360457 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.360465 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.360472 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.360479 | 
orchestrator | + force_delete = false 2026-03-01 00:02:32.360486 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.360494 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.360501 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.360508 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.360515 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.360522 | orchestrator | + name = "testbed-node-4" 2026-03-01 00:02:32.360530 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.360537 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.360544 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.360551 | orchestrator | + stop_before_destroy = false 2026-03-01 00:02:32.360558 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.360566 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.360573 | orchestrator | 2026-03-01 00:02:32.360580 | orchestrator | + block_device { 2026-03-01 00:02:32.360587 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.360595 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.360602 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.360609 | orchestrator | + multiattach = false 2026-03-01 00:02:32.360616 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.360623 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360631 | orchestrator | } 2026-03-01 00:02:32.360638 | orchestrator | 2026-03-01 00:02:32.360645 | orchestrator | + network { 2026-03-01 00:02:32.360653 | orchestrator | + access_network = false 2026-03-01 00:02:32.360660 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.360667 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.360674 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.360682 | orchestrator | + name = (known 
after apply) 2026-03-01 00:02:32.360689 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.360696 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360703 | orchestrator | } 2026-03-01 00:02:32.360710 | orchestrator | } 2026-03-01 00:02:32.360722 | orchestrator | 2026-03-01 00:02:32.360729 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-01 00:02:32.360737 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-01 00:02:32.360744 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-01 00:02:32.360751 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-01 00:02:32.360758 | orchestrator | + all_metadata = (known after apply) 2026-03-01 00:02:32.360765 | orchestrator | + all_tags = (known after apply) 2026-03-01 00:02:32.360773 | orchestrator | + availability_zone = "nova" 2026-03-01 00:02:32.360780 | orchestrator | + config_drive = true 2026-03-01 00:02:32.360793 | orchestrator | + created = (known after apply) 2026-03-01 00:02:32.360801 | orchestrator | + flavor_id = (known after apply) 2026-03-01 00:02:32.360808 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-01 00:02:32.360815 | orchestrator | + force_delete = false 2026-03-01 00:02:32.360826 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-01 00:02:32.360833 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.360840 | orchestrator | + image_id = (known after apply) 2026-03-01 00:02:32.360847 | orchestrator | + image_name = (known after apply) 2026-03-01 00:02:32.360855 | orchestrator | + key_pair = "testbed" 2026-03-01 00:02:32.360862 | orchestrator | + name = "testbed-node-5" 2026-03-01 00:02:32.360869 | orchestrator | + power_state = "active" 2026-03-01 00:02:32.360876 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.360883 | orchestrator | + security_groups = (known after apply) 2026-03-01 00:02:32.360890 | orchestrator | + 
stop_before_destroy = false 2026-03-01 00:02:32.360897 | orchestrator | + updated = (known after apply) 2026-03-01 00:02:32.360905 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-01 00:02:32.360912 | orchestrator | 2026-03-01 00:02:32.360919 | orchestrator | + block_device { 2026-03-01 00:02:32.360926 | orchestrator | + boot_index = 0 2026-03-01 00:02:32.360934 | orchestrator | + delete_on_termination = false 2026-03-01 00:02:32.360941 | orchestrator | + destination_type = "volume" 2026-03-01 00:02:32.360948 | orchestrator | + multiattach = false 2026-03-01 00:02:32.360955 | orchestrator | + source_type = "volume" 2026-03-01 00:02:32.360962 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.360970 | orchestrator | } 2026-03-01 00:02:32.360977 | orchestrator | 2026-03-01 00:02:32.360984 | orchestrator | + network { 2026-03-01 00:02:32.360991 | orchestrator | + access_network = false 2026-03-01 00:02:32.360999 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-01 00:02:32.361006 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-01 00:02:32.361013 | orchestrator | + mac = (known after apply) 2026-03-01 00:02:32.361020 | orchestrator | + name = (known after apply) 2026-03-01 00:02:32.361028 | orchestrator | + port = (known after apply) 2026-03-01 00:02:32.361035 | orchestrator | + uuid = (known after apply) 2026-03-01 00:02:32.361042 | orchestrator | } 2026-03-01 00:02:32.361049 | orchestrator | } 2026-03-01 00:02:32.361057 | orchestrator | 2026-03-01 00:02:32.361064 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-01 00:02:32.361071 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-01 00:02:32.361078 | orchestrator | + fingerprint = (known after apply) 2026-03-01 00:02:32.361086 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.361093 | orchestrator | + name = "testbed" 2026-03-01 00:02:32.361100 | orchestrator | + private_key = 
(sensitive value) 2026-03-01 00:02:32.361108 | orchestrator | + public_key = (known after apply) 2026-03-01 00:02:32.361115 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.361122 | orchestrator | + user_id = (known after apply) 2026-03-01 00:02:32.361129 | orchestrator | } 2026-03-01 00:02:32.361137 | orchestrator | 2026-03-01 00:02:32.361144 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-01 00:02:32.361151 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-01 00:02:32.361163 | orchestrator | + device = (known after apply) 2026-03-01 00:02:32.361170 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.361177 | orchestrator | + instance_id = (known after apply) 2026-03-01 00:02:32.361185 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.361192 | orchestrator | + volume_id = (known after apply) 2026-03-01 00:02:32.361199 | orchestrator | } 2026-03-01 00:02:32.361206 | orchestrator | 2026-03-01 00:02:32.361214 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-01 00:02:32.361221 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-01 00:02:32.361228 | orchestrator | + device = (known after apply) 2026-03-01 00:02:32.361236 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.361243 | orchestrator | + instance_id = (known after apply) 2026-03-01 00:02:32.361250 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.361257 | orchestrator | + volume_id = (known after apply) 2026-03-01 00:02:32.361265 | orchestrator | } 2026-03-01 00:02:32.361272 | orchestrator | 2026-03-01 00:02:32.361279 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-01 00:02:32.361287 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-01 00:02:32.361294 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-01 00:02:32.365473 | orchestrator | + network_id = (known after apply) 2026-03-01 00:02:32.365479 | orchestrator | + no_gateway = false 2026-03-01 00:02:32.365485 | orchestrator | + region = (known after apply) 2026-03-01 00:02:32.365490 | orchestrator | + service_types = (known after apply) 2026-03-01 00:02:32.365500 | orchestrator | + tenant_id = (known after apply) 2026-03-01 00:02:32.365506 | orchestrator | 2026-03-01 00:02:32.365512 | orchestrator | + allocation_pool { 2026-03-01 00:02:32.365517 | orchestrator | + end = "192.168.31.250" 2026-03-01 00:02:32.365523 | orchestrator | + start = "192.168.31.200" 2026-03-01 00:02:32.365529 | orchestrator | } 2026-03-01 00:02:32.365535 | orchestrator | } 2026-03-01 00:02:32.365541 | orchestrator | 2026-03-01 00:02:32.365547 | orchestrator | # terraform_data.image will be created 2026-03-01 00:02:32.365553 | orchestrator | + resource "terraform_data" "image" { 2026-03-01 00:02:32.365559 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.365564 | orchestrator | + input = "Ubuntu 24.04" 2026-03-01 00:02:32.365570 | orchestrator | + output = (known after apply) 2026-03-01 00:02:32.365576 | orchestrator | } 2026-03-01 00:02:32.365582 | orchestrator | 2026-03-01 00:02:32.365588 | orchestrator | # terraform_data.image_node will be created 2026-03-01 00:02:32.365594 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-01 00:02:32.365599 | orchestrator | + id = (known after apply) 2026-03-01 00:02:32.365605 | orchestrator | + input = "Ubuntu 24.04" 2026-03-01 00:02:32.365611 | orchestrator | + output = (known after apply) 2026-03-01 00:02:32.365617 | orchestrator | } 2026-03-01 00:02:32.365623 | orchestrator | 2026-03-01 00:02:32.365628 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-01 00:02:32.365634 | orchestrator |
2026-03-01 00:02:32.365640 | orchestrator | Changes to Outputs:
2026-03-01 00:02:32.365646 | orchestrator | + manager_address = (sensitive value)
2026-03-01 00:02:32.365652 | orchestrator | + private_key = (sensitive value)
2026-03-01 00:02:32.623933 | orchestrator | terraform_data.image_node: Creating...
2026-03-01 00:02:32.624422 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e58d34ec-2728-4f8f-705a-7246313237bb]
2026-03-01 00:02:32.624565 | orchestrator | terraform_data.image: Creating...
2026-03-01 00:02:32.625527 | orchestrator | terraform_data.image: Creation complete after 0s [id=45ecdd28-6114-7f94-2d69-c95f3a9769a8]
2026-03-01 00:02:32.640056 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-01 00:02:32.647051 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-01 00:02:32.647831 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-01 00:02:32.649902 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-01 00:02:32.652102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-01 00:02:32.653108 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-01 00:02:32.653261 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-01 00:02:32.654341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-01 00:02:32.655928 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-01 00:02:32.658963 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-01 00:02:33.087922 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-01 00:02:33.097480 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-01 00:02:33.114956 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-01 00:02:33.122716 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-01 00:02:33.168579 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-01 00:02:33.175644 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-01 00:02:33.698066 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=f4a58ae6-0f94-44df-b0c3-e88493aa7a3c]
2026-03-01 00:02:33.709387 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-01 00:02:36.300956 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=75e82ebc-a155-450e-9812-4025914dfeb7]
2026-03-01 00:02:36.308061 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3ecd9c37-f666-48da-b9e6-5062929e61fa]
2026-03-01 00:02:36.320099 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=0950a1db-ab80-47bb-a3df-92529f49175c]
2026-03-01 00:02:36.320186 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-01 00:02:36.321338 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-01 00:02:36.326176 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=13610e01-1185-4ea8-85ed-961cbe272389]
2026-03-01 00:02:36.326951 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-01 00:02:36.330487 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-01 00:02:36.345950 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=fa955766-0e66-4eff-90a7-dd2f9191ad17]
2026-03-01 00:02:36.352178 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-01 00:02:36.373208 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=13ef5d91-70cf-4b91-a3c5-d7eedb39bef0]
2026-03-01 00:02:36.382217 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-01 00:02:36.390477 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6]
2026-03-01 00:02:36.406279 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-01 00:02:36.412533 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=29e105f9fef87abdec50cb36a7602bb851424c8d]
2026-03-01 00:02:36.418502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=eb2aa366-42c4-4388-b5bb-c244b0993c0c]
2026-03-01 00:02:36.423920 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-01 00:02:36.428510 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-01 00:02:36.434481 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7119cd8cbc3f7a44b3b6fb78c92eb61a21db23c2]
2026-03-01 00:02:36.445107 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=538fc64d-5c22-41e2-8e6b-45fa8fa82fec]
2026-03-01 00:02:37.086072 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3261f155-d508-4354-b595-423683351540]
2026-03-01 00:02:37.351741 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=d0d2468a-73e9-4f37-ab82-5eb906a0f481]
2026-03-01 00:02:37.362207 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-01 00:02:39.760897 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=96df6614-cdd9-4e86-8384-63e48cc6d403]
2026-03-01 00:02:39.779012 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=f0c2808e-98cb-491a-8a34-4e9503ad7b60]
2026-03-01 00:02:39.810966 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=8b16b83d-56b3-4b94-b113-6fb31fe8cad7]
2026-03-01 00:02:39.847952 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=5e1112be-30db-4f57-b8d5-3281055496d6]
2026-03-01 00:02:39.868942 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=906633b6-f217-4172-b29a-2cd328ecb060]
2026-03-01 00:02:39.876238 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e86ac708-d159-4a58-aba3-0d32343dfb5e]
2026-03-01 00:02:40.899577 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=bd9c6287-57bb-465e-96f0-180573cc08ff]
2026-03-01 00:02:41.724952 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-01 00:02:41.725000 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-01 00:02:41.725007 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-01 00:02:41.725014 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=5ad41950-b072-4065-a281-bee2e6547511]
2026-03-01 00:02:41.725020 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-01 00:02:41.725025 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-01 00:02:41.725030 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-01 00:02:41.725052 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-01 00:02:41.725056 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-01 00:02:41.725061 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-01 00:02:41.725065 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-01 00:02:41.725069 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-01 00:02:41.725074 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=8d97e4bc-f0d5-4350-9247-7e95952012e9]
2026-03-01 00:02:41.725078 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-01 00:02:41.872156 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=04bb8cdb-6f68-43c3-a193-6629ac15db3f]
2026-03-01 00:02:41.883215 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-01 00:02:42.618658 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=729ddc19-821c-4544-ad65-f491ffb59dc0]
2026-03-01 00:02:42.623943 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-01 00:02:42.955211 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7e0308f7-bc88-4294-8d85-1af56ad4941e]
2026-03-01 00:02:42.962554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-01 00:02:43.121615 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=273c3cde-72b8-4816-97b0-c1b4fed9c51b]
2026-03-01 00:02:43.131273 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-01 00:02:43.141735 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=bf9b8aea-fa15-48a8-8072-257e580aa88c]
2026-03-01 00:02:43.146815 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-01 00:02:43.199385 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8174921b-55ec-4478-b77f-dcb9f6fe0bbe]
2026-03-01 00:02:43.207026 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-01 00:02:43.298053 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=0a7ae496-fa93-4e35-b587-aa784a30a9ca]
2026-03-01 00:02:43.303705 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-01 00:02:43.366064 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=cb64d4fc-1e61-4848-a2e9-e4ae17698a14]
2026-03-01 00:02:43.434402 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=0abe6176-ad66-457f-a40f-9e912f2ddc79]
2026-03-01 00:02:43.598040 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=bad02d49-4de7-47c3-97a3-9818fafcd3de]
2026-03-01 00:02:43.807308 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=b3cff9c2-5c60-4bf7-834f-9f584cee0758]
2026-03-01 00:02:44.051095 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=5bbedd79-9a66-40e4-8300-aac2e72b117e]
2026-03-01 00:02:44.246320 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=5b4ca6a9-04e7-4699-a1e3-30d7da2ce64d]
2026-03-01 00:02:44.400630 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=7a027f44-908b-4145-81aa-2294deb1c93c]
2026-03-01 00:02:44.554929 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=7db69ca7-ef11-4499-9fc1-efd47695fa82]
2026-03-01 00:02:44.661949 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=824d71b1-399c-4ebc-98c3-ab9da6429c8e]
2026-03-01 00:02:47.574722 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=b49c7054-97ec-4a73-aabd-654b2cf2c490]
2026-03-01 00:02:47.589589 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-01 00:02:47.610724 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-01 00:02:47.610801 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-01 00:02:47.615716 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-01 00:02:47.624425 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-01 00:02:47.628548 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-01 00:02:47.629757 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-01 00:02:49.786717 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=a96a76c0-1bd4-4557-8e24-2eefede46385]
2026-03-01 00:02:49.794758 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-01 00:02:49.802281 | orchestrator | local_file.inventory: Creating...
2026-03-01 00:02:49.802372 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-01 00:02:49.806762 | orchestrator | local_file.inventory: Creation complete after 0s [id=70c8b296b5ef0e085188b3f46758879c8a51999d]
2026-03-01 00:02:49.806829 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e767fe884195a390853bccb66cfaa36612532bf8]
2026-03-01 00:02:50.640055 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=a96a76c0-1bd4-4557-8e24-2eefede46385]
2026-03-01 00:02:57.610511 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-01 00:02:57.611622 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-01 00:02:57.616982 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-01 00:02:57.625406 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-01 00:02:57.628993 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-01 00:02:57.631319 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-01 00:03:07.619516 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-01 00:03:07.619637 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-01 00:03:07.619667 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-01 00:03:07.625940 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-01 00:03:07.629178 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-01 00:03:07.632497 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-01 00:03:17.628600 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-01 00:03:17.628706 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-01 00:03:17.628725 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-01 00:03:17.628731 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-01 00:03:17.629793 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-01 00:03:17.633271 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-01 00:03:18.474416 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=688019de-6564-43e9-9220-80e81abf858c]
2026-03-01 00:03:27.637724 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-01 00:03:27.637844 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-01 00:03:27.637870 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-01 00:03:27.637891 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-01 00:03:27.637963 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-01 00:03:37.647079 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-01 00:03:37.647180 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-01 00:03:37.647190 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-01 00:03:37.647197 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-01 00:03:37.647206 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-01 00:03:47.655798 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-03-01 00:03:47.655906 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-01 00:03:47.655913 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-03-01 00:03:47.655926 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-03-01 00:03:47.655930 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-03-01 00:03:48.637676 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=ffedd7bf-172b-4b62-9675-9ba72d0c3775]
2026-03-01 00:03:48.791899 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m1s [id=4c86d63a-9da4-4ad2-a531-9a766d4c5ec4]
2026-03-01 00:03:57.656138 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-03-01 00:03:57.656233 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-03-01 00:03:57.656297 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m10s elapsed]
2026-03-01 00:03:58.725955 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m11s [id=7d847ea3-aabc-4854-b0ef-8a99ede7a012]
2026-03-01 00:03:58.908414 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m11s [id=c1b30169-9917-42e9-8667-eac399e042d2]
2026-03-01 00:03:58.928640 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m11s [id=9886215c-8020-4566-a838-5d5d0e43e6e4]
2026-03-01 00:03:58.952445 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-01 00:03:58.953580 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-01 00:03:58.963560 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-01 00:03:58.965473 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-01 00:03:58.967910 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-01 00:03:58.976500 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4762197290466830867]
2026-03-01 00:03:58.976569 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-01 00:03:58.978465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-01 00:03:58.978560 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-01 00:03:58.978580 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-01 00:03:58.978750 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-01 00:03:59.014257 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-01 00:04:02.398078 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=4c86d63a-9da4-4ad2-a531-9a766d4c5ec4/fa955766-0e66-4eff-90a7-dd2f9191ad17]
2026-03-01 00:04:02.407066 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=c1b30169-9917-42e9-8667-eac399e042d2/0950a1db-ab80-47bb-a3df-92529f49175c]
2026-03-01 00:04:02.429981 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ffedd7bf-172b-4b62-9675-9ba72d0c3775/eb2aa366-42c4-4388-b5bb-c244b0993c0c]
2026-03-01 00:04:02.432357 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=c1b30169-9917-42e9-8667-eac399e042d2/75e82ebc-a155-450e-9812-4025914dfeb7]
2026-03-01 00:04:02.453659 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=ffedd7bf-172b-4b62-9675-9ba72d0c3775/9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6]
2026-03-01 00:04:02.456003 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=4c86d63a-9da4-4ad2-a531-9a766d4c5ec4/538fc64d-5c22-41e2-8e6b-45fa8fa82fec]
2026-03-01 00:04:08.594808 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=c1b30169-9917-42e9-8667-eac399e042d2/3ecd9c37-f666-48da-b9e6-5062929e61fa]
2026-03-01 00:04:08.609853 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=4c86d63a-9da4-4ad2-a531-9a766d4c5ec4/13ef5d91-70cf-4b91-a3c5-d7eedb39bef0]
2026-03-01 00:04:08.630387 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=ffedd7bf-172b-4b62-9675-9ba72d0c3775/13610e01-1185-4ea8-85ed-961cbe272389]
2026-03-01 00:04:09.015485 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-01 00:04:19.015984 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-01 00:04:19.434726 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a6f87afc-56ec-4b6c-ae8d-d6349a70802d]
2026-03-01 00:04:24.447026 | orchestrator |
2026-03-01 00:04:24.447301 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-01 00:04:24.447338 | orchestrator |
2026-03-01 00:04:24.447347 | orchestrator | Outputs:
2026-03-01 00:04:24.447356 | orchestrator |
2026-03-01 00:04:24.447360 | orchestrator | manager_address =
2026-03-01 00:04:24.447365 | orchestrator | private_key =
2026-03-01 00:04:24.645530 | orchestrator | ok: Runtime: 0:02:01.585799
2026-03-01 00:04:24.688950 |
2026-03-01 00:04:24.689130 | TASK [Create infrastructure (stable)]
2026-03-01 00:04:25.227582 | orchestrator | skipping: Conditional result was False
2026-03-01 00:04:25.237407 |
2026-03-01 00:04:25.237640 | TASK [Fetch manager address]
2026-03-01 00:04:25.726290 | orchestrator | ok
2026-03-01 00:04:25.733829 |
2026-03-01 00:04:25.734012 | TASK [Set manager_host address]
2026-03-01 00:04:25.811883 | orchestrator | ok
2026-03-01 00:04:25.821535 |
2026-03-01 00:04:25.821659 | LOOP [Update ansible collections]
2026-03-01 00:04:26.959635 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-01 00:04:26.960177 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-01 00:04:26.960254 | orchestrator | Starting galaxy collection install process
2026-03-01 00:04:26.960367 | orchestrator | Process install dependency map
2026-03-01 00:04:26.960412 | orchestrator | Starting collection install process
2026-03-01 00:04:26.960450 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-03-01 00:04:26.960493 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-03-01 00:04:26.960546 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-01 00:04:26.960628 | orchestrator | ok: Item: commons Runtime: 0:00:00.740385
2026-03-01 00:04:28.130080 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-01 00:04:28.130264 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-01 00:04:28.130312 | orchestrator | Starting galaxy collection install process
2026-03-01 00:04:28.130352 | orchestrator | Process install dependency map
2026-03-01 00:04:28.130387 | orchestrator | Starting collection install process
2026-03-01 00:04:28.130422 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-03-01 00:04:28.130454 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-03-01 00:04:28.130484 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-01 00:04:28.130559 | orchestrator | ok: Item: services Runtime: 0:00:00.842790
2026-03-01 00:04:28.146190 |
2026-03-01 00:04:28.146326 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-01 00:04:38.762528 | orchestrator | ok
2026-03-01 00:04:38.770660 |
2026-03-01 00:04:38.770773 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-01 00:05:38.820689 | orchestrator | ok
2026-03-01 00:05:38.834514 |
2026-03-01 00:05:38.834645 | TASK [Fetch manager ssh hostkey]
2026-03-01 00:05:40.436304 | orchestrator | Output suppressed because no_log was given
2026-03-01 00:05:40.459992 |
2026-03-01 00:05:40.460194 | TASK [Get ssh keypair from terraform environment]
2026-03-01 00:05:41.003655 | orchestrator | ok: Runtime: 0:00:00.008222
2026-03-01 00:05:41.016739 |
2026-03-01 00:05:41.016878 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-01 00:05:41.048229 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-01 00:05:41.055536 |
2026-03-01 00:05:41.055651 | TASK [Run manager part 0]
2026-03-01 00:05:42.300394 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-01 00:05:42.364517 | orchestrator |
2026-03-01 00:05:42.364588 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-01 00:05:42.364598 | orchestrator |
2026-03-01 00:05:42.364616 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-01 00:05:44.287672 | orchestrator | ok: [testbed-manager]
2026-03-01 00:05:44.287711 | orchestrator |
2026-03-01 00:05:44.287731 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-01 00:05:44.287743 | orchestrator |
2026-03-01 00:05:44.287754 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:05:46.239155 | orchestrator | ok: [testbed-manager]
2026-03-01 00:05:46.239190 | orchestrator |
2026-03-01 00:05:46.239196 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-01 00:05:46.908663 | orchestrator | ok: [testbed-manager]
2026-03-01 00:05:46.908754 | orchestrator |
2026-03-01 00:05:46.908769 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-01 00:05:46.952453 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:46.952495 | orchestrator |
2026-03-01 00:05:46.952507 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-01 00:05:46.979548 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:46.979585 | orchestrator |
2026-03-01 00:05:46.979592 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-01 00:05:47.007929 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:47.007965 | orchestrator |
2026-03-01 00:05:47.007970 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-01 00:05:47.037645 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:47.037763 | orchestrator |
2026-03-01 00:05:47.037770 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-01 00:05:47.065988 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:47.066053 | orchestrator |
2026-03-01 00:05:47.066064 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-01 00:05:47.097124 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:47.097168 | orchestrator |
2026-03-01 00:05:47.097179 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-01 00:05:47.127675 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:05:47.127720 | orchestrator |
2026-03-01 00:05:47.127731 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-01 00:05:47.827111 | orchestrator | changed: [testbed-manager]
2026-03-01 00:05:47.827147 | orchestrator |
2026-03-01 00:05:47.827153 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-01 00:08:38.055619 | orchestrator | changed: [testbed-manager]
2026-03-01 00:08:38.055681 | orchestrator |
2026-03-01 00:08:38.055691 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-01 00:10:13.215799 | orchestrator | changed: [testbed-manager]
2026-03-01 00:10:13.216009 | orchestrator |
2026-03-01 00:10:13.216029 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-01 00:10:38.722735 | orchestrator | changed: [testbed-manager]
2026-03-01 00:10:38.722805 | orchestrator |
2026-03-01 00:10:38.722826 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-01 00:10:48.180367 | orchestrator | changed: [testbed-manager]
2026-03-01 00:10:48.180545 | orchestrator |
2026-03-01 00:10:48.180563 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-01 00:10:48.231979 | orchestrator | ok: [testbed-manager]
2026-03-01 00:10:48.232081 | orchestrator |
2026-03-01 00:10:48.232106 | orchestrator | TASK [Get current user] ********************************************************
2026-03-01 00:10:49.041386 | orchestrator | ok: [testbed-manager]
2026-03-01 00:10:49.041496 | orchestrator |
2026-03-01 00:10:49.041516 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-01 00:10:49.804546 | orchestrator | changed: [testbed-manager]
2026-03-01 00:10:49.804623 | orchestrator |
2026-03-01 00:10:49.804638 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-01 00:10:57.362354 | orchestrator | changed: [testbed-manager]
2026-03-01 00:10:57.362400 | orchestrator |
2026-03-01 00:10:57.362423 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-01 00:11:04.623871 | orchestrator | changed: [testbed-manager]
2026-03-01 00:11:04.623977 | orchestrator |
2026-03-01 00:11:04.624005 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-01 00:11:07.021217 | orchestrator | changed: [testbed-manager]
2026-03-01 00:11:07.021315 | orchestrator |
2026-03-01 00:11:07.021334 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-03-01 00:11:08.588811 | orchestrator | changed: [testbed-manager]
2026-03-01 00:11:08.589732 | orchestrator |
2026-03-01 00:11:08.589767 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-03-01 00:11:09.655077 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-03-01 00:11:09.655169 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-03-01 00:11:09.655187 | orchestrator |
2026-03-01 00:11:09.655201 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-03-01 00:11:09.696511 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-03-01 00:11:09.696559 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-03-01 00:11:09.696565 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-03-01 00:11:09.696569 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-03-01 00:11:12.864266 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-03-01 00:11:12.864330 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-03-01 00:11:12.864340 | orchestrator |
2026-03-01 00:11:12.864348 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2026-03-01 00:11:13.428637 | orchestrator | changed: [testbed-manager]
2026-03-01 00:11:13.428676 | orchestrator |
2026-03-01 00:11:13.428683 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2026-03-01 00:16:36.146225 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2026-03-01 00:16:36.146278 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2026-03-01 00:16:36.146289 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2026-03-01 00:16:36.146297 | orchestrator |
2026-03-01 00:16:36.146305 | orchestrator | TASK [Install local collections] ***********************************************
2026-03-01 00:16:38.428190 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2026-03-01 00:16:38.428226 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2026-03-01 00:16:38.428231 | orchestrator |
2026-03-01 00:16:38.428235 | orchestrator | PLAY [Create operator user] ****************************************************
2026-03-01 00:16:38.428240 | orchestrator |
2026-03-01 00:16:38.428244 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:16:39.825319 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:39.825355 | orchestrator |
2026-03-01 00:16:39.825362 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-01 00:16:39.858509 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:39.858540 | orchestrator |
2026-03-01 00:16:39.858546 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-01 00:16:39.918706 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:39.918743 | orchestrator |
2026-03-01 00:16:39.918750 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-01 00:16:40.698759 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:40.698843 | orchestrator |
2026-03-01 00:16:40.698851 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-01 00:16:41.422209 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:41.422250 | orchestrator |
2026-03-01 00:16:41.422256 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-01 00:16:42.785739 | orchestrator | changed: [testbed-manager] => (item=adm)
2026-03-01 00:16:42.785835 | orchestrator | changed: [testbed-manager] => (item=sudo)
2026-03-01 00:16:42.785846 | orchestrator |
2026-03-01 00:16:42.785870 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-01 00:16:44.163420 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:44.163483 | orchestrator |
2026-03-01 00:16:44.163493 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-01 00:16:45.898603 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:16:45.898690 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2026-03-01 00:16:45.898701 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:16:45.898710 | orchestrator |
2026-03-01 00:16:45.898719 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-01 00:16:45.955967 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:45.956048 | orchestrator |
2026-03-01 00:16:45.956061 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-01 00:16:46.024760 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:46.024871 | orchestrator |
2026-03-01 00:16:46.024890 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-01 00:16:46.595370 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:46.595452 | orchestrator |
2026-03-01 00:16:46.595465 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-01 00:16:46.663291 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:46.663398 | orchestrator |
2026-03-01 00:16:46.663422 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-01 00:16:47.550268 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-01 00:16:47.550316 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:47.550325 | orchestrator |
2026-03-01 00:16:47.550333 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-01 00:16:47.580091 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:47.580136 | orchestrator |
2026-03-01 00:16:47.580145 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-01 00:16:47.614190 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:47.614277 | orchestrator |
2026-03-01 00:16:47.614293 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-01 00:16:47.644517 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:47.644595 | orchestrator |
2026-03-01 00:16:47.644611 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-01 00:16:47.716778 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:47.716952 | orchestrator |
2026-03-01 00:16:47.716971 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-01 00:16:48.437851 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:48.437948 | orchestrator |
2026-03-01 00:16:48.437964 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-01 00:16:48.437976 | orchestrator |
2026-03-01 00:16:48.437988 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:16:49.801648 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:49.801743 | orchestrator |
2026-03-01 00:16:49.801758 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2026-03-01 00:16:50.774775 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:50.774884 | orchestrator |
2026-03-01 00:16:50.774901 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:16:50.774915 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
2026-03-01 00:16:50.774926 | orchestrator |
2026-03-01 00:16:50.996415 | orchestrator | ok: Runtime: 0:11:09.389114
2026-03-01 00:16:51.018507 |
2026-03-01 00:16:51.018654 | TASK [Point out that the log in on the manager is now possible]
2026-03-01 00:16:51.066464 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2026-03-01 00:16:51.077771 |
2026-03-01 00:16:51.077904 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-01 00:16:51.109922 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-01 00:16:51.116656 |
2026-03-01 00:16:51.116764 | TASK [Run manager part 1 + 2]
2026-03-01 00:16:52.217951 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-01 00:16:52.274694 | orchestrator |
2026-03-01 00:16:52.274739 | orchestrator | PLAY [Run manager part 1] ******************************************************
2026-03-01 00:16:52.274746 | orchestrator |
2026-03-01 00:16:52.274759 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:16:55.187913 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:55.187960 | orchestrator |
2026-03-01 00:16:55.187982 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-01 00:16:55.221433 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:55.221483 | orchestrator |
2026-03-01 00:16:55.221496 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-01 00:16:55.266143 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:55.266201 | orchestrator |
2026-03-01 00:16:55.266212 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-01 00:16:55.312589 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:55.312652 | orchestrator |
2026-03-01 00:16:55.312664 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-01 00:16:55.434104 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:55.434166 | orchestrator |
2026-03-01 00:16:55.434178 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-01 00:16:55.499514 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:55.499571 | orchestrator |
2026-03-01 00:16:55.499582 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-01 00:16:55.553744 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2026-03-01 00:16:55.553801 | orchestrator |
2026-03-01 00:16:55.553834 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-01 00:16:56.267761 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:56.267804 | orchestrator |
2026-03-01 00:16:56.267865 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-01 00:16:56.317001 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:16:56.317053 | orchestrator |
2026-03-01 00:16:56.317060 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-01 00:16:57.732242 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:57.732302 | orchestrator |
2026-03-01 00:16:57.732313 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-01 00:16:58.323938 | orchestrator | ok: [testbed-manager]
2026-03-01 00:16:58.323994 | orchestrator |
2026-03-01 00:16:58.324002 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-01 00:16:59.455100 | orchestrator | changed: [testbed-manager]
2026-03-01 00:16:59.455159 | orchestrator |
2026-03-01 00:16:59.455173 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-01 00:17:14.633610 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:14.633720 | orchestrator |
2026-03-01 00:17:14.633737 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-01 00:17:15.296463 | orchestrator | ok: [testbed-manager]
2026-03-01 00:17:15.296553 | orchestrator |
2026-03-01 00:17:15.296573 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-01 00:17:15.349818 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:17:15.349934 | orchestrator |
2026-03-01 00:17:15.349952 | orchestrator | TASK [Copy SSH public key] *****************************************************
2026-03-01 00:17:16.263154 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:16.263374 | orchestrator |
2026-03-01 00:17:16.263392 | orchestrator | TASK [Copy SSH private key] ****************************************************
2026-03-01 00:17:17.152118 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:17.152205 | orchestrator |
2026-03-01 00:17:17.152221 | orchestrator | TASK [Create configuration directory] ******************************************
2026-03-01 00:17:17.688770 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:17.688882 | orchestrator |
2026-03-01 00:17:17.688900 | orchestrator | TASK [Copy testbed repo] *******************************************************
2026-03-01 00:17:17.730271 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-03-01 00:17:17.730336 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-03-01 00:17:17.730343 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2026-03-01 00:17:17.730349 | orchestrator | deprecation_warnings=False in ansible.cfg.
2026-03-01 00:17:19.681554 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:19.681647 | orchestrator |
2026-03-01 00:17:19.681665 | orchestrator | TASK [Install python requirements in venv] *************************************
2026-03-01 00:17:27.979945 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2026-03-01 00:17:27.980062 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2026-03-01 00:17:27.980083 | orchestrator | ok: [testbed-manager] => (item=packaging)
2026-03-01 00:17:27.980097 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2026-03-01 00:17:27.980117 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2026-03-01 00:17:27.980128 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2026-03-01 00:17:27.980140 | orchestrator |
2026-03-01 00:17:27.980153 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2026-03-01 00:17:29.007950 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:29.007995 | orchestrator |
2026-03-01 00:17:29.008004 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2026-03-01 00:17:29.054958 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:17:29.055002 | orchestrator |
2026-03-01 00:17:29.055011 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2026-03-01 00:17:32.104954 | orchestrator | changed: [testbed-manager]
2026-03-01 00:17:32.105001 | orchestrator |
2026-03-01 00:17:32.105011 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2026-03-01 00:17:32.148148 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:17:32.148190 | orchestrator |
2026-03-01 00:17:32.148200 | orchestrator | TASK [Run manager part 2] ******************************************************
2026-03-01 00:19:03.908317 | orchestrator | changed: [testbed-manager]
2026-03-01 00:19:03.908411 | orchestrator |
2026-03-01 00:19:03.908426 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-01 00:19:05.094081 | orchestrator | ok: [testbed-manager]
2026-03-01 00:19:05.094141 | orchestrator |
2026-03-01 00:19:05.094156 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:19:05.094169 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2026-03-01 00:19:05.094179 | orchestrator |
2026-03-01 00:19:05.257593 | orchestrator | ok: Runtime: 0:02:13.757172
2026-03-01 00:19:05.271316 |
2026-03-01 00:19:05.271567 | TASK [Reboot manager]
2026-03-01 00:19:06.809258 | orchestrator | ok: Runtime: 0:00:00.948484
2026-03-01 00:19:06.824773 |
2026-03-01 00:19:06.824935 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-01 00:19:21.042811 | orchestrator | ok
2026-03-01 00:19:21.054073 |
2026-03-01 00:19:21.054216 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-01 00:20:21.100422 | orchestrator | ok
2026-03-01 00:20:21.110352 |
2026-03-01 00:20:21.110496 | TASK [Deploy manager + bootstrap nodes]
2026-03-01 00:20:23.454929 | orchestrator |
2026-03-01 00:20:23.455199 | orchestrator | # DEPLOY MANAGER
2026-03-01 00:20:23.455227 | orchestrator |
2026-03-01 00:20:23.455243 | orchestrator | + set -e
2026-03-01 00:20:23.455269 | orchestrator | + echo
2026-03-01 00:20:23.455284 | orchestrator | + echo '# DEPLOY MANAGER'
2026-03-01 00:20:23.455302 | orchestrator | + echo
2026-03-01 00:20:23.455357 | orchestrator | + cat /opt/manager-vars.sh
2026-03-01 00:20:23.458117 | orchestrator | export NUMBER_OF_NODES=6
2026-03-01 00:20:23.458145 | orchestrator |
2026-03-01 00:20:23.458158 | orchestrator | export CEPH_VERSION=reef
2026-03-01 00:20:23.458172 | orchestrator | export CONFIGURATION_VERSION=main
2026-03-01 00:20:23.458185 | orchestrator | export MANAGER_VERSION=latest
2026-03-01 00:20:23.458208 | orchestrator | export OPENSTACK_VERSION=2024.2
2026-03-01 00:20:23.458219 | orchestrator |
2026-03-01 00:20:23.458238 | orchestrator | export ARA=false
2026-03-01 00:20:23.458250 | orchestrator | export DEPLOY_MODE=manager
2026-03-01 00:20:23.458268 | orchestrator | export TEMPEST=true
2026-03-01 00:20:23.458280 | orchestrator | export IS_ZUUL=true
2026-03-01 00:20:23.458291 | orchestrator |
2026-03-01 00:20:23.458310 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:20:23.458322 | orchestrator | export EXTERNAL_API=false
2026-03-01 00:20:23.458334 | orchestrator |
2026-03-01 00:20:23.458344 | orchestrator | export IMAGE_USER=ubuntu
2026-03-01 00:20:23.458361 | orchestrator | export IMAGE_NODE_USER=ubuntu
2026-03-01 00:20:23.458372 | orchestrator |
2026-03-01 00:20:23.458383 | orchestrator | export CEPH_STACK=ceph-ansible
2026-03-01 00:20:23.458401 | orchestrator |
2026-03-01 00:20:23.458413 | orchestrator | + echo
2026-03-01 00:20:23.458430 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-01 00:20:23.459148 | orchestrator | ++ export INTERACTIVE=false
2026-03-01 00:20:23.459167 | orchestrator | ++ INTERACTIVE=false
2026-03-01 00:20:23.459182 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-01 00:20:23.459196 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-01 00:20:23.459309 | orchestrator | + source /opt/manager-vars.sh
2026-03-01 00:20:23.459327 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-01 00:20:23.459340 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-01 00:20:23.459354 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-01 00:20:23.459367 | orchestrator | ++ CEPH_VERSION=reef
2026-03-01 00:20:23.459390 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-01 00:20:23.459404 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-01 00:20:23.459416 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-01 00:20:23.459429 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-01 00:20:23.459448 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-01 00:20:23.459478 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-01 00:20:23.459490 | orchestrator | ++ export ARA=false
2026-03-01 00:20:23.459501 | orchestrator | ++ ARA=false
2026-03-01 00:20:23.459512 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-01 00:20:23.459523 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-01 00:20:23.459534 | orchestrator | ++ export TEMPEST=true
2026-03-01 00:20:23.459545 | orchestrator | ++ TEMPEST=true
2026-03-01 00:20:23.459556 | orchestrator | ++ export IS_ZUUL=true
2026-03-01 00:20:23.459567 | orchestrator | ++ IS_ZUUL=true
2026-03-01 00:20:23.459578 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:20:23.459588 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:20:23.459600 | orchestrator | ++ export EXTERNAL_API=false
2026-03-01 00:20:23.459610 | orchestrator | ++ EXTERNAL_API=false
2026-03-01 00:20:23.459621 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-01 00:20:23.459632 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-01 00:20:23.459643 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-01 00:20:23.459655 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-01 00:20:23.459666 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-01 00:20:23.459677 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-01 00:20:23.459700 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2026-03-01 00:20:23.509259 | orchestrator | + docker version
2026-03-01 00:20:23.615519 | orchestrator | Client: Docker Engine - Community
2026-03-01 00:20:23.615629 | orchestrator | Version: 27.5.1
2026-03-01 00:20:23.615645 | orchestrator | API version: 1.47
2026-03-01 00:20:23.615659 | orchestrator | Go version: go1.22.11
2026-03-01 00:20:23.615670 | orchestrator | Git commit: 9f9e405
2026-03-01 00:20:23.615682 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-01 00:20:23.615694 | orchestrator | OS/Arch: linux/amd64
2026-03-01 00:20:23.615705 | orchestrator | Context: default
2026-03-01 00:20:23.615716 | orchestrator |
2026-03-01 00:20:23.615727 | orchestrator | Server: Docker Engine - Community
2026-03-01 00:20:23.615738 | orchestrator | Engine:
2026-03-01 00:20:23.615749 | orchestrator | Version: 27.5.1
2026-03-01 00:20:23.615761 | orchestrator | API version: 1.47 (minimum version 1.24)
2026-03-01 00:20:23.615802 | orchestrator | Go version: go1.22.11
2026-03-01 00:20:23.615814 | orchestrator | Git commit: 4c9b3b0
2026-03-01 00:20:23.615825 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-01 00:20:23.615836 | orchestrator | OS/Arch: linux/amd64
2026-03-01 00:20:23.615847 | orchestrator | Experimental: false
2026-03-01 00:20:23.615858 | orchestrator | containerd:
2026-03-01 00:20:23.615869 | orchestrator | Version: v2.2.1
2026-03-01 00:20:23.615880 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75
2026-03-01 00:20:23.615892 | orchestrator | runc:
2026-03-01 00:20:23.615903 | orchestrator | Version: 1.3.4
2026-03-01 00:20:23.615914 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8
2026-03-01 00:20:23.615925 | orchestrator | docker-init:
2026-03-01 00:20:23.615935 | orchestrator | Version: 0.19.0
2026-03-01 00:20:23.615960 | orchestrator | GitCommit: de40ad0
2026-03-01 00:20:23.618298 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-03-01 00:20:23.627307 | orchestrator | + set -e
2026-03-01 00:20:23.627349 | orchestrator | + source /opt/manager-vars.sh
2026-03-01 00:20:23.627364 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-01 00:20:23.627378 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-01 00:20:23.627396 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-01 00:20:23.627407 | orchestrator | ++ CEPH_VERSION=reef
2026-03-01 00:20:23.627419 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-01 00:20:23.627431 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-01 00:20:23.627442 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-01 00:20:23.627453 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-01 00:20:23.627468 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-01 00:20:23.627479 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-01 00:20:23.627496 | orchestrator | ++ export ARA=false
2026-03-01 00:20:23.627507 | orchestrator | ++ ARA=false
2026-03-01 00:20:23.627524 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-01 00:20:23.627535 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-01 00:20:23.627546 | orchestrator | ++ export TEMPEST=true
2026-03-01 00:20:23.627557 | orchestrator | ++ TEMPEST=true
2026-03-01 00:20:23.627567 | orchestrator | ++ export IS_ZUUL=true
2026-03-01 00:20:23.627578 | orchestrator | ++ IS_ZUUL=true
2026-03-01 00:20:23.627589 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:20:23.627600 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:20:23.627611 | orchestrator | ++ export EXTERNAL_API=false
2026-03-01 00:20:23.627621 | orchestrator | ++ EXTERNAL_API=false
2026-03-01 00:20:23.627632 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-01 00:20:23.627643 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-01 00:20:23.627654 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-01 00:20:23.627664 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-01 00:20:23.627675 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-01 00:20:23.627693 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-01 00:20:23.627708 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-01 00:20:23.627719 | orchestrator | ++ export INTERACTIVE=false
2026-03-01 00:20:23.627730 | orchestrator | ++ INTERACTIVE=false
2026-03-01 00:20:23.627740 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-01 00:20:23.627756 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-01 00:20:23.628023 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-01 00:20:23.628039 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-01 00:20:23.628080 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-01 00:20:23.635284 | orchestrator | + set -e
2026-03-01 00:20:23.635332 | orchestrator | + VERSION=reef
2026-03-01 00:20:23.636246 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-01 00:20:23.642006 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-01 00:20:23.642130 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-01 00:20:23.647465 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-01 00:20:23.654147 | orchestrator | + set -e
2026-03-01 00:20:23.654224 | orchestrator | + VERSION=2024.2
2026-03-01 00:20:23.654955 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-01 00:20:23.658702 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-01 00:20:23.658741 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-01 00:20:23.663977 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-01 00:20:23.664644 | orchestrator | ++ semver latest 7.0.0
2026-03-01 00:20:23.717364 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-01 00:20:23.717454 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-01 00:20:23.717469 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-01 00:20:23.717975 | orchestrator | ++ semver latest 10.0.0-0
2026-03-01 00:20:23.771978 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-01 00:20:23.772500 | orchestrator | ++ semver 2024.2 2025.1
2026-03-01 00:20:23.826581 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-01 00:20:23.826686 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-01 00:20:23.914232 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-01 00:20:23.914916 | orchestrator | + source /opt/venv/bin/activate
2026-03-01 00:20:23.915916 | orchestrator | ++ deactivate nondestructive
2026-03-01 00:20:23.915945 | orchestrator | ++ '[' -n '' ']'
2026-03-01 00:20:23.915958 | orchestrator | ++ '[' -n '' ']'
2026-03-01 00:20:23.915977 | orchestrator | ++ hash -r
2026-03-01 00:20:23.915989 | orchestrator | ++ '[' -n '' ']'
2026-03-01 00:20:23.916000 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-01 00:20:23.916011 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-01 00:20:23.916028 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-01 00:20:23.916247 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-01 00:20:23.916266 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-01 00:20:23.916277 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-01 00:20:23.916288 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-01 00:20:23.916302 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-01 00:20:23.916325 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-01 00:20:23.916336 | orchestrator | ++ export PATH
2026-03-01 00:20:23.916352 | orchestrator | ++ '[' -n '' ']'
2026-03-01 00:20:23.916367 | orchestrator | ++ '[' -z '' ']'
2026-03-01 00:20:23.916386 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-01 00:20:23.916439 | orchestrator | ++ PS1='(venv) '
2026-03-01 00:20:23.916452 | orchestrator | ++ export PS1
2026-03-01 00:20:23.916463 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-01 00:20:23.916486 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-01 00:20:23.916642 | orchestrator | ++ hash -r
2026-03-01 00:20:23.916821 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-01 00:20:24.965854 | orchestrator |
2026-03-01 00:20:24.965951 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-01 00:20:24.965963 | orchestrator |
2026-03-01 00:20:24.965971 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-01 00:20:25.460743 | orchestrator | ok: [testbed-manager]
2026-03-01 00:20:25.460853 | orchestrator |
2026-03-01 00:20:25.460871 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-01 00:20:26.309644 | orchestrator | changed: [testbed-manager]
2026-03-01 00:20:26.309750 | orchestrator |
2026-03-01 00:20:26.309767 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-01 00:20:26.309781 | orchestrator |
2026-03-01 00:20:26.309793 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:20:28.353176 | orchestrator | ok: [testbed-manager]
2026-03-01 00:20:28.353285 | orchestrator |
2026-03-01 00:20:28.353301 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-01 00:20:28.403472 | orchestrator | ok: [testbed-manager]
2026-03-01 00:20:28.403579 | orchestrator |
2026-03-01 00:20:28.403599 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-01 00:20:28.828042 | orchestrator | changed: [testbed-manager]
2026-03-01 00:20:28.828202 | orchestrator |
2026-03-01 00:20:28.828219 | orchestrator | TASK [Add netbox_enable parameter]
********************************************* 2026-03-01 00:20:28.871548 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:20:28.871646 | orchestrator | 2026-03-01 00:20:28.871661 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-01 00:20:29.200453 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:29.200559 | orchestrator | 2026-03-01 00:20:29.200575 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-01 00:20:29.497007 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:29.497154 | orchestrator | 2026-03-01 00:20:29.497175 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-01 00:20:29.600410 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:20:29.600503 | orchestrator | 2026-03-01 00:20:29.600524 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-01 00:20:29.600545 | orchestrator | 2026-03-01 00:20:29.600565 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-01 00:20:31.189139 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:31.189247 | orchestrator | 2026-03-01 00:20:31.189265 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-01 00:20:31.283686 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-01 00:20:31.283776 | orchestrator | 2026-03-01 00:20:31.283790 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-01 00:20:31.335378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-01 00:20:31.335480 | orchestrator | 2026-03-01 00:20:31.335495 | orchestrator | TASK [osism.services.traefik : Create required directories] 
******************** 2026-03-01 00:20:32.346257 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-01 00:20:32.346403 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-01 00:20:32.346421 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-01 00:20:32.346433 | orchestrator | 2026-03-01 00:20:32.346446 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-01 00:20:33.980859 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-01 00:20:33.980958 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-01 00:20:33.980975 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-01 00:20:33.980988 | orchestrator | 2026-03-01 00:20:33.981000 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-01 00:20:34.566318 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-01 00:20:34.566428 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:34.566446 | orchestrator | 2026-03-01 00:20:34.566461 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-01 00:20:35.169703 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-01 00:20:35.169810 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:35.169827 | orchestrator | 2026-03-01 00:20:35.169840 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-01 00:20:35.224674 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:20:35.224739 | orchestrator | 2026-03-01 00:20:35.224750 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-01 00:20:35.561649 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:35.561748 | orchestrator | 2026-03-01 00:20:35.561765 | orchestrator | 
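The `set-ceph-version.sh` and `set-openstack-version.sh` invocations traced earlier follow the same grep-then-sed pattern: rewrite a `key: value` line in the manager `configuration.yml` only if the key is already present. A minimal sketch of that pattern (the function name and parameterized key/path are illustrative; the real scripts hardcode them and use `set -e`):

```shell
# Sketch of the grep-then-sed version pin seen in the trace above.
# set_version KEY VERSION CONF_FILE -- rewrites "KEY: ..." in place.
set_version() {
    local key="$1" version="$2" conf="$3"
    # Mirror the [[ -n $(grep ...) ]] guard from the trace: only rewrite
    # the line when the key already exists in the configuration file.
    if grep -q "^${key}:" "$conf"; then
        # GNU sed in-place edit, as used on the Debian/Ubuntu nodes here.
        sed -i "s/${key}: .*/${key}: ${version}/g" "$conf"
    fi
}
```

This keeps the scripts idempotent: re-running with the same version is a no-op, and a missing key is silently skipped rather than appended.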
TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-01 00:20:35.637474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-01 00:20:35.637579 | orchestrator | 2026-03-01 00:20:35.637612 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-01 00:20:36.609134 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:36.609236 | orchestrator | 2026-03-01 00:20:36.609253 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-01 00:20:37.326346 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:37.326461 | orchestrator | 2026-03-01 00:20:37.326494 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-01 00:20:52.154798 | orchestrator | changed: [testbed-manager] 2026-03-01 00:20:52.154903 | orchestrator | 2026-03-01 00:20:52.154943 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-01 00:20:52.210926 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:20:52.211022 | orchestrator | 2026-03-01 00:20:52.211037 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-01 00:20:52.211051 | orchestrator | 2026-03-01 00:20:52.211063 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-01 00:20:54.031300 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:54.031399 | orchestrator | 2026-03-01 00:20:54.031447 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-01 00:20:54.148272 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-01 00:20:54.148366 | orchestrator | 2026-03-01 00:20:54.148380 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-03-01 00:20:54.206324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-01 00:20:54.206418 | orchestrator | 2026-03-01 00:20:54.206436 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-01 00:20:56.552321 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:56.552409 | orchestrator | 2026-03-01 00:20:56.552419 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-01 00:20:56.596661 | orchestrator | ok: [testbed-manager] 2026-03-01 00:20:56.596759 | orchestrator | 2026-03-01 00:20:56.596775 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-01 00:20:56.716065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-01 00:20:56.716221 | orchestrator | 2026-03-01 00:20:56.716252 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-01 00:20:59.536952 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-01 00:20:59.537071 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-01 00:20:59.537126 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-01 00:20:59.537146 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-01 00:20:59.537165 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-01 00:20:59.537182 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-01 00:20:59.537201 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-01 00:20:59.537220 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-03-01 00:20:59.537240 | orchestrator | 2026-03-01 00:20:59.537261 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-01 00:21:00.184781 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:00.184883 | orchestrator | 2026-03-01 00:21:00.184901 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-01 00:21:00.822432 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:00.822559 | orchestrator | 2026-03-01 00:21:00.822588 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-01 00:21:00.902336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-01 00:21:00.902423 | orchestrator | 2026-03-01 00:21:00.902438 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-01 00:21:02.115430 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-01 00:21:02.115562 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-01 00:21:02.115578 | orchestrator | 2026-03-01 00:21:02.115592 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-01 00:21:02.740371 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:02.740458 | orchestrator | 2026-03-01 00:21:02.740474 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-01 00:21:02.789454 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:21:02.789558 | orchestrator | 2026-03-01 00:21:02.789583 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-01 00:21:02.863933 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-01 00:21:02.864034 | orchestrator | 2026-03-01 00:21:02.864052 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-01 00:21:03.478629 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:03.478726 | orchestrator | 2026-03-01 00:21:03.478743 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-01 00:21:03.544705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-01 00:21:03.544824 | orchestrator | 2026-03-01 00:21:03.544840 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-01 00:21:04.884000 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-01 00:21:04.884165 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-01 00:21:04.884191 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:04.884213 | orchestrator | 2026-03-01 00:21:04.884233 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-01 00:21:05.503196 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:05.503296 | orchestrator | 2026-03-01 00:21:05.503314 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-01 00:21:05.561641 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:21:05.561742 | orchestrator | 2026-03-01 00:21:05.561774 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-01 00:21:05.665628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-01 00:21:05.665723 | orchestrator | 
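The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` steps traced earlier implement a version gate: `enable_osism_kubernetes: true` is emitted when the manager version compares at or above 7.0.0, or equals the literal `latest`. A reconstructed sketch of that logic, assuming a `semver` helper that prints `-1`/`0`/`1` for less/equal/greater (as the trace output suggests):

```shell
# Reconstructed version gate from the xtrace above. The external
# `semver` command is an assumption: it is taken to print -1, 0 or 1
# when comparing its two arguments.
should_enable_kubernetes() {
    local version="$1"
    # Enable when version >= 7.0.0, or when the floating "latest" tag
    # is in use (which semver orders below any pinned release).
    if [[ "$(semver "$version" 7.0.0)" -ge 0 ]] || [[ "$version" == latest ]]; then
        echo 'enable_osism_kubernetes: true'
    fi
}
```

The explicit `== latest` fallback matters because, as the trace shows, `semver latest 7.0.0` evaluates to `-1`, so the numeric comparison alone would wrongly exclude the default tag.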
2026-03-01 00:21:05.665738 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-01 00:21:06.200785 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:06.200889 | orchestrator | 2026-03-01 00:21:06.200929 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-01 00:21:06.612376 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:06.612508 | orchestrator | 2026-03-01 00:21:06.612535 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-01 00:21:07.712349 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-01 00:21:07.712470 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-01 00:21:07.712486 | orchestrator | 2026-03-01 00:21:07.712499 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-01 00:21:08.307572 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:08.307675 | orchestrator | 2026-03-01 00:21:08.307691 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-01 00:21:08.636916 | orchestrator | ok: [testbed-manager] 2026-03-01 00:21:08.637001 | orchestrator | 2026-03-01 00:21:08.637012 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-01 00:21:08.952697 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:08.952800 | orchestrator | 2026-03-01 00:21:08.952818 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-01 00:21:09.001358 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:21:09.001456 | orchestrator | 2026-03-01 00:21:09.001471 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-01 00:21:09.068491 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-01 00:21:09.068606 | orchestrator | 2026-03-01 00:21:09.068630 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-01 00:21:09.104075 | orchestrator | ok: [testbed-manager] 2026-03-01 00:21:09.104259 | orchestrator | 2026-03-01 00:21:09.104289 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-01 00:21:10.857312 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-01 00:21:10.857393 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-01 00:21:10.857402 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-01 00:21:10.857408 | orchestrator | 2026-03-01 00:21:10.857415 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-01 00:21:11.509193 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:11.509260 | orchestrator | 2026-03-01 00:21:11.509267 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-01 00:21:12.142993 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:12.143185 | orchestrator | 2026-03-01 00:21:12.143208 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-01 00:21:12.806849 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:12.806951 | orchestrator | 2026-03-01 00:21:12.806971 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-01 00:21:12.873808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-01 00:21:12.873903 | orchestrator | 2026-03-01 00:21:12.873919 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-03-01 00:21:12.912150 | orchestrator | ok: [testbed-manager] 2026-03-01 00:21:12.912230 | orchestrator | 2026-03-01 00:21:12.912241 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-01 00:21:13.541279 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-01 00:21:13.541377 | orchestrator | 2026-03-01 00:21:13.541393 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-01 00:21:13.616619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-01 00:21:13.616711 | orchestrator | 2026-03-01 00:21:13.616727 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-01 00:21:14.272480 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:14.272588 | orchestrator | 2026-03-01 00:21:14.272607 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-01 00:21:14.786870 | orchestrator | ok: [testbed-manager] 2026-03-01 00:21:14.786978 | orchestrator | 2026-03-01 00:21:14.786993 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-01 00:21:14.825751 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:21:14.825868 | orchestrator | 2026-03-01 00:21:14.825892 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-01 00:21:14.859742 | orchestrator | ok: [testbed-manager] 2026-03-01 00:21:14.859842 | orchestrator | 2026-03-01 00:21:14.859859 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-01 00:21:15.598357 | orchestrator | changed: [testbed-manager] 2026-03-01 00:21:15.598459 | orchestrator | 2026-03-01 
00:21:15.598477 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-01 00:22:16.194168 | orchestrator | changed: [testbed-manager] 2026-03-01 00:22:16.194274 | orchestrator | 2026-03-01 00:22:16.194290 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-01 00:22:17.177998 | orchestrator | ok: [testbed-manager] 2026-03-01 00:22:17.178217 | orchestrator | 2026-03-01 00:22:17.178238 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-01 00:22:17.236003 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:22:17.236082 | orchestrator | 2026-03-01 00:22:17.236097 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-01 00:22:19.986892 | orchestrator | changed: [testbed-manager] 2026-03-01 00:22:19.986993 | orchestrator | 2026-03-01 00:22:19.987010 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-01 00:22:20.077039 | orchestrator | ok: [testbed-manager] 2026-03-01 00:22:20.077175 | orchestrator | 2026-03-01 00:22:20.077216 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-01 00:22:20.077231 | orchestrator | 2026-03-01 00:22:20.077243 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-01 00:22:20.123472 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:22:20.123582 | orchestrator | 2026-03-01 00:22:20.123595 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-01 00:23:20.166358 | orchestrator | Pausing for 60 seconds 2026-03-01 00:23:20.166462 | orchestrator | changed: [testbed-manager] 2026-03-01 00:23:20.166478 | orchestrator | 2026-03-01 00:23:20.166490 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-03-01 00:23:23.201084 | orchestrator | changed: [testbed-manager] 2026-03-01 00:23:23.201242 | orchestrator | 2026-03-01 00:23:23.201261 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-01 00:24:04.635877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-01 00:24:04.635989 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-01 00:24:04.636006 | orchestrator | changed: [testbed-manager] 2026-03-01 00:24:04.636042 | orchestrator | 2026-03-01 00:24:04.636056 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-01 00:24:14.256437 | orchestrator | changed: [testbed-manager] 2026-03-01 00:24:14.256580 | orchestrator | 2026-03-01 00:24:14.256609 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-01 00:24:14.341378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-01 00:24:14.341479 | orchestrator | 2026-03-01 00:24:14.341503 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-01 00:24:14.341524 | orchestrator | 2026-03-01 00:24:14.341542 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-01 00:24:14.379588 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:24:14.379677 | orchestrator | 2026-03-01 00:24:14.379690 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-01 00:24:14.447631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-01 00:24:14.447733 | 
orchestrator | 2026-03-01 00:24:14.447751 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-01 00:24:15.119274 | orchestrator | changed: [testbed-manager] 2026-03-01 00:24:15.119388 | orchestrator | 2026-03-01 00:24:15.119406 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-01 00:24:18.125928 | orchestrator | ok: [testbed-manager] 2026-03-01 00:24:18.126063 | orchestrator | 2026-03-01 00:24:18.126084 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-01 00:24:18.206639 | orchestrator | ok: [testbed-manager] => { 2026-03-01 00:24:18.206734 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-01 00:24:18.206750 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-01 00:24:18.206762 | orchestrator | "Checking running containers against expected versions...", 2026-03-01 00:24:18.206775 | orchestrator | "", 2026-03-01 00:24:18.206789 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-01 00:24:18.206800 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-01 00:24:18.206811 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.206823 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-01 00:24:18.206834 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.206845 | orchestrator | "", 2026-03-01 00:24:18.206857 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-01 00:24:18.206868 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-01 00:24:18.206879 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.206890 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-01 00:24:18.206900 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.206911 | orchestrator 
| "", 2026-03-01 00:24:18.206922 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-01 00:24:18.206933 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-01 00:24:18.206944 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.206955 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-01 00:24:18.206966 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.206977 | orchestrator | "", 2026-03-01 00:24:18.206988 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-01 00:24:18.206999 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-01 00:24:18.207011 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.207022 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-01 00:24:18.207104 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.207120 | orchestrator | "", 2026-03-01 00:24:18.207132 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-01 00:24:18.207143 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-01 00:24:18.207182 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.207226 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-01 00:24:18.207242 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.207255 | orchestrator | "", 2026-03-01 00:24:18.207268 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-01 00:24:18.207280 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-01 00:24:18.207293 | orchestrator | " Enabled: true", 2026-03-01 00:24:18.207306 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-01 00:24:18.207319 | orchestrator | " Status: ✅ MATCH", 2026-03-01 00:24:18.207332 | orchestrator | "", 2026-03-01 00:24:18.207345 | orchestrator | "Checking service: 
ara-server (ARA Server)",
2026-03-01 00:24:18.207358 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-01 00:24:18.207371 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207384 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-01 00:24:18.207394 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207405 | orchestrator | "",
2026-03-01 00:24:18.207416 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-01 00:24:18.207427 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-01 00:24:18.207438 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207449 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-01 00:24:18.207460 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207470 | orchestrator | "",
2026-03-01 00:24:18.207490 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-01 00:24:18.207501 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-01 00:24:18.207517 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207528 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-01 00:24:18.207539 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207550 | orchestrator | "",
2026-03-01 00:24:18.207561 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-01 00:24:18.207572 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-01 00:24:18.207582 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207593 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-01 00:24:18.207604 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207615 | orchestrator | "",
2026-03-01 00:24:18.207625 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-01 00:24:18.207636 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207647 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207657 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207668 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207678 | orchestrator | "",
2026-03-01 00:24:18.207689 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-01 00:24:18.207700 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207711 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207722 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207732 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207743 | orchestrator | "",
2026-03-01 00:24:18.207753 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-01 00:24:18.207764 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207775 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207785 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207796 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207807 | orchestrator | "",
2026-03-01 00:24:18.207817 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-01 00:24:18.207828 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207838 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207849 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207860 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207877 | orchestrator | "",
2026-03-01 00:24:18.207887 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-01 00:24:18.207916 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207927 | orchestrator | " Enabled: true",
2026-03-01 00:24:18.207938 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-03-01 00:24:18.207949 | orchestrator | " Status: ✅ MATCH",
2026-03-01 00:24:18.207960 | orchestrator | "",
2026-03-01 00:24:18.207971 | orchestrator | "=== Summary ===",
2026-03-01 00:24:18.207982 | orchestrator | "Errors (version mismatches): 0",
2026-03-01 00:24:18.207993 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-01 00:24:18.208004 | orchestrator | "",
2026-03-01 00:24:18.208015 | orchestrator | "✅ All running containers match expected versions!"
2026-03-01 00:24:18.208026 | orchestrator | ]
2026-03-01 00:24:18.208037 | orchestrator | }
2026-03-01 00:24:18.208049 | orchestrator | 
2026-03-01 00:24:18.208060 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-01 00:24:18.265664 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:24:18.265742 | orchestrator | 
2026-03-01 00:24:18.265754 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:24:18.265765 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-01 00:24:18.265774 | orchestrator | 
2026-03-01 00:24:18.363949 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-01 00:24:18.364043 | orchestrator | + deactivate
2026-03-01 00:24:18.364059 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-01 00:24:18.364074 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-01 00:24:18.364086 | orchestrator | + export PATH
2026-03-01 00:24:18.364097 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-01 00:24:18.364110 | orchestrator | + '[' -n '' ']'
2026-03-01 00:24:18.364121 | orchestrator | + hash -r
2026-03-01 00:24:18.364132 | orchestrator | + '[' -n '' ']'
2026-03-01 00:24:18.364143 | orchestrator | + unset VIRTUAL_ENV
2026-03-01 00:24:18.364153 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-01 00:24:18.364165 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-01 00:24:18.364176 | orchestrator | + unset -f deactivate
2026-03-01 00:24:18.364187 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-01 00:24:18.373136 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-01 00:24:18.373249 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-01 00:24:18.373266 | orchestrator | + local max_attempts=60
2026-03-01 00:24:18.373280 | orchestrator | + local name=ceph-ansible
2026-03-01 00:24:18.373292 | orchestrator | + local attempt_num=1
2026-03-01 00:24:18.374264 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-01 00:24:18.400306 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-01 00:24:18.400409 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-01 00:24:18.400430 | orchestrator | + local max_attempts=60
2026-03-01 00:24:18.400447 | orchestrator | + local name=kolla-ansible
2026-03-01 00:24:18.400463 | orchestrator | + local attempt_num=1
2026-03-01 00:24:18.400799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-01 00:24:18.426844 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-01 00:24:18.426958 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-01 00:24:18.426975 | orchestrator | + local max_attempts=60
2026-03-01 00:24:18.426988 | orchestrator | + local name=osism-ansible
2026-03-01 00:24:18.426999 | orchestrator | + local attempt_num=1
2026-03-01 00:24:18.427309 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-01 00:24:18.451671 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-01 00:24:18.451788 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-01 00:24:18.451814 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-01 00:24:19.128642 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-01 00:24:19.280871 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-01 00:24:19.280989 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281004 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281015 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-01 00:24:19.281027 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-03-01 00:24:19.281037 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281046 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281056 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy)
2026-03-01 00:24:19.281081 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281092 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-03-01 00:24:19.281102 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281111 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-03-01 00:24:19.281121 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281130 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-01 00:24:19.281140 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.281150 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-03-01 00:24:19.286358 | orchestrator | ++ semver latest 7.0.0
2026-03-01 00:24:19.326086 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-01 00:24:19.326184 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-01 00:24:19.326246 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-01 00:24:19.328452 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-01 00:24:31.385995 | orchestrator | 2026-03-01 00:24:31 | INFO  | Prepare task for execution of resolvconf.
2026-03-01 00:24:31.588334 | orchestrator | 2026-03-01 00:24:31 | INFO  | Task 4b5f9d8f-6a8c-46e7-824f-061b9734d218 (resolvconf) was prepared for execution.
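The xtrace above shows the deploy script polling container health before proceeding. A minimal sketch of such a helper, reconstructed from the trace (the actual script lives in the testbed repository; the `INSPECT_CMD` override is an assumption added here so the loop can run without Docker):

```shell
# Allow the health probe to be overridden (assumption for testability);
# by default it matches the docker inspect call seen in the trace.
if [ -z "${INSPECT_CMD:-}" ]; then
    INSPECT_CMD='docker inspect -f {{.State.Health.Status}}'
fi

wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    # Poll the container's health status until it reports "healthy".
    while [ "$($INSPECT_CMD "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    echo "$name is healthy"
}

# Usage, as in the trace: wait_for_container_healthy 60 ceph-ansible
```

In the trace all three containers were already healthy, so each call returned after a single `docker inspect`.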
2026-03-01 00:24:31.588433 | orchestrator | 2026-03-01 00:24:31 | INFO  | It takes a moment until task 4b5f9d8f-6a8c-46e7-824f-061b9734d218 (resolvconf) has been started and output is visible here.
2026-03-01 00:24:44.654294 | orchestrator | 
2026-03-01 00:24:44.654416 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-01 00:24:44.654431 | orchestrator | 
2026-03-01 00:24:44.654441 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 00:24:44.654451 | orchestrator | Sunday 01 March 2026 00:24:35 +0000 (0:00:00.141) 0:00:00.141 **********
2026-03-01 00:24:44.654460 | orchestrator | ok: [testbed-manager]
2026-03-01 00:24:44.654470 | orchestrator | 
2026-03-01 00:24:44.654479 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-01 00:24:44.654489 | orchestrator | Sunday 01 March 2026 00:24:39 +0000 (0:00:03.685) 0:00:03.826 **********
2026-03-01 00:24:44.654498 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:24:44.654508 | orchestrator | 
2026-03-01 00:24:44.654517 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-01 00:24:44.654526 | orchestrator | Sunday 01 March 2026 00:24:39 +0000 (0:00:00.071) 0:00:03.898 **********
2026-03-01 00:24:44.654535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-01 00:24:44.654545 | orchestrator | 
2026-03-01 00:24:44.654555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-01 00:24:44.654566 | orchestrator | Sunday 01 March 2026 00:24:39 +0000 (0:00:00.077) 0:00:03.976 **********
2026-03-01 00:24:44.654593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-01 00:24:44.654608 | orchestrator | 
2026-03-01 00:24:44.654623 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-01 00:24:44.654638 | orchestrator | Sunday 01 March 2026 00:24:39 +0000 (0:00:00.072) 0:00:04.049 **********
2026-03-01 00:24:44.654653 | orchestrator | ok: [testbed-manager]
2026-03-01 00:24:44.654668 | orchestrator | 
2026-03-01 00:24:44.654683 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-01 00:24:44.654700 | orchestrator | Sunday 01 March 2026 00:24:40 +0000 (0:00:01.077) 0:00:05.127 **********
2026-03-01 00:24:44.654711 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:24:44.654720 | orchestrator | 
2026-03-01 00:24:44.654729 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-01 00:24:44.654738 | orchestrator | Sunday 01 March 2026 00:24:40 +0000 (0:00:00.057) 0:00:05.184 **********
2026-03-01 00:24:44.654747 | orchestrator | ok: [testbed-manager]
2026-03-01 00:24:44.654756 | orchestrator | 
2026-03-01 00:24:44.654765 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-01 00:24:44.654773 | orchestrator | Sunday 01 March 2026 00:24:40 +0000 (0:00:00.473) 0:00:05.657 **********
2026-03-01 00:24:44.654782 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:24:44.654791 | orchestrator | 
2026-03-01 00:24:44.654800 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-01 00:24:44.654812 | orchestrator | Sunday 01 March 2026 00:24:41 +0000 (0:00:00.063) 0:00:05.721 **********
2026-03-01 00:24:44.654822 | orchestrator | changed: [testbed-manager]
2026-03-01 00:24:44.654832 | orchestrator | 
2026-03-01 00:24:44.654843 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-01 00:24:44.654853 | orchestrator | Sunday 01 March 2026 00:24:41 +0000 (0:00:00.456) 0:00:06.177 **********
2026-03-01 00:24:44.654864 | orchestrator | changed: [testbed-manager]
2026-03-01 00:24:44.654874 | orchestrator | 
2026-03-01 00:24:44.654885 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-01 00:24:44.654895 | orchestrator | Sunday 01 March 2026 00:24:42 +0000 (0:00:00.979) 0:00:07.157 **********
2026-03-01 00:24:44.654905 | orchestrator | ok: [testbed-manager]
2026-03-01 00:24:44.654916 | orchestrator | 
2026-03-01 00:24:44.654943 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-01 00:24:44.654952 | orchestrator | Sunday 01 March 2026 00:24:43 +0000 (0:00:00.853) 0:00:08.010 **********
2026-03-01 00:24:44.654961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-01 00:24:44.654970 | orchestrator | 
2026-03-01 00:24:44.654979 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-01 00:24:44.654987 | orchestrator | Sunday 01 March 2026 00:24:43 +0000 (0:00:00.076) 0:00:08.087 **********
2026-03-01 00:24:44.654996 | orchestrator | changed: [testbed-manager]
2026-03-01 00:24:44.655005 | orchestrator | 
2026-03-01 00:24:44.655013 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:24:44.655024 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-01 00:24:44.655032 | orchestrator | 
2026-03-01 00:24:44.655041 | orchestrator | 
2026-03-01 00:24:44.655050 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:24:44.655059 | orchestrator | Sunday 01 March 2026 00:24:44 +0000 (0:00:01.092) 0:00:09.180 **********
2026-03-01 00:24:44.655068 | orchestrator | ===============================================================================
2026-03-01 00:24:44.655077 | orchestrator | Gathering Facts --------------------------------------------------------- 3.69s
2026-03-01 00:24:44.655085 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s
2026-03-01 00:24:44.655094 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s
2026-03-01 00:24:44.655103 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.98s
2026-03-01 00:24:44.655112 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.85s
2026-03-01 00:24:44.655121 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2026-03-01 00:24:44.655147 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.46s
2026-03-01 00:24:44.655157 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-03-01 00:24:44.655165 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-03-01 00:24:44.655174 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-01 00:24:44.655183 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-01 00:24:44.655192 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s
2026-03-01 00:24:44.655201 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-01 00:24:44.842398 | orchestrator | + osism apply sshconfig
2026-03-01 00:24:56.699653 | orchestrator | 2026-03-01 00:24:56 | INFO  | Prepare task for execution of sshconfig.
2026-03-01 00:24:56.779761 | orchestrator | 2026-03-01 00:24:56 | INFO  | Task ae39e94f-b1da-45be-b3c4-3ce413591bd6 (sshconfig) was prepared for execution.
2026-03-01 00:24:56.779949 | orchestrator | 2026-03-01 00:24:56 | INFO  | It takes a moment until task ae39e94f-b1da-45be-b3c4-3ce413591bd6 (sshconfig) has been started and output is visible here.
2026-03-01 00:25:07.843894 | orchestrator | 
2026-03-01 00:25:07.843969 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-01 00:25:07.843976 | orchestrator | 
2026-03-01 00:25:07.843981 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-01 00:25:07.843986 | orchestrator | Sunday 01 March 2026 00:25:00 +0000 (0:00:00.119) 0:00:00.119 **********
2026-03-01 00:25:07.843990 | orchestrator | ok: [testbed-manager]
2026-03-01 00:25:07.843996 | orchestrator | 
2026-03-01 00:25:07.844000 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-01 00:25:07.844004 | orchestrator | Sunday 01 March 2026 00:25:01 +0000 (0:00:00.459) 0:00:00.579 **********
2026-03-01 00:25:07.844027 | orchestrator | changed: [testbed-manager]
2026-03-01 00:25:07.844033 | orchestrator | 
2026-03-01 00:25:07.844037 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-01 00:25:07.844041 | orchestrator | Sunday 01 March 2026 00:25:01 +0000 (0:00:00.468) 0:00:01.048 **********
2026-03-01 00:25:07.844045 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-01 00:25:07.844050 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-01 00:25:07.844054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-01 00:25:07.844058 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-01 00:25:07.844062 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-01 00:25:07.844066 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-01 00:25:07.844070 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-01 00:25:07.844073 | orchestrator | 
2026-03-01 00:25:07.844077 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-01 00:25:07.844081 | orchestrator | Sunday 01 March 2026 00:25:06 +0000 (0:00:05.378) 0:00:06.427 **********
2026-03-01 00:25:07.844085 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:25:07.844089 | orchestrator | 
2026-03-01 00:25:07.844092 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-01 00:25:07.844096 | orchestrator | Sunday 01 March 2026 00:25:07 +0000 (0:00:00.570) 0:00:06.502 **********
2026-03-01 00:25:07.844100 | orchestrator | changed: [testbed-manager]
2026-03-01 00:25:07.844104 | orchestrator | 
2026-03-01 00:25:07.844108 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:25:07.844114 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-01 00:25:07.844119 | orchestrator | 
2026-03-01 00:25:07.844123 | orchestrator | 
2026-03-01 00:25:07.844127 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:25:07.844131 | orchestrator | Sunday 01 March 2026 00:25:07 +0000 (0:00:00.570) 0:00:07.072 **********
2026-03-01 00:25:07.844135 | orchestrator | ===============================================================================
2026-03-01 00:25:07.844139 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.38s
2026-03-01 00:25:07.844143 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2026-03-01 00:25:07.844148 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s
2026-03-01 00:25:07.844153 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.46s
2026-03-01 00:25:07.844159 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-03-01 00:25:08.136340 | orchestrator | + osism apply known-hosts
2026-03-01 00:25:20.236990 | orchestrator | 2026-03-01 00:25:20 | INFO  | Prepare task for execution of known-hosts.
2026-03-01 00:25:20.305818 | orchestrator | 2026-03-01 00:25:20 | INFO  | Task dae8f173-4dad-4ffb-96cc-d3b8d9265ffc (known-hosts) was prepared for execution.
2026-03-01 00:25:20.305943 | orchestrator | 2026-03-01 00:25:20 | INFO  | It takes a moment until task dae8f173-4dad-4ffb-96cc-d3b8d9265ffc (known-hosts) has been started and output is visible here.
2026-03-01 00:25:35.590632 | orchestrator | 
2026-03-01 00:25:35.590721 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-01 00:25:35.590733 | orchestrator | 
2026-03-01 00:25:35.590743 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-01 00:25:35.590751 | orchestrator | Sunday 01 March 2026 00:25:24 +0000 (0:00:00.118) 0:00:00.118 **********
2026-03-01 00:25:35.590758 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-01 00:25:35.590765 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-01 00:25:35.590772 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-01 00:25:35.590791 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-01 00:25:35.590798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-01 00:25:35.590804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
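The sshconfig play above follows a fragment-and-assemble pattern: one config file per host under `.ssh/config.d`, then a single assembled ssh config. A small sketch of that pattern (the scratch paths, host list, and `User` value are assumptions for illustration, not the role's actual defaults):

```shell
# Build per-host fragments and assemble them, mirroring the
# "Ensure config for each host exist" and "Assemble ssh config" tasks.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"
for host in testbed-manager testbed-node-0 testbed-node-1; do
    # One fragment per host; the role templates these from inventory data.
    printf 'Host %s\n    User dragon\n\n' "$host" > "$workdir/config.d/$host"
done
# Concatenate all fragments into the final config file.
cat "$workdir"/config.d/* > "$workdir/config"
grep -c '^Host ' "$workdir/config"   # prints 3, one stanza per fragment
```

Keeping one fragment per host makes the per-host task idempotent and lets a later assemble step regenerate the whole file deterministically.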
2026-03-01 00:25:35.590810 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-01 00:25:35.590817 | orchestrator | 
2026-03-01 00:25:35.590823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-01 00:25:35.590831 | orchestrator | Sunday 01 March 2026 00:25:29 +0000 (0:00:05.622) 0:00:05.741 **********
2026-03-01 00:25:35.590843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-01 00:25:35.590853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-01 00:25:35.590859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-01 00:25:35.590865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-01 00:25:35.590872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-01 00:25:35.590878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-01 00:25:35.590884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-01 00:25:35.590890 | orchestrator | 
2026-03-01 00:25:35.590897 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.590903 | orchestrator | Sunday 01 March 2026 00:25:29 +0000 (0:00:00.155) 0:00:05.896 **********
2026-03-01 00:25:35.590910 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKYYBCKZvVEQV4+Jg3n1R3IO0RNWuvoYaIvTnjAeo5VyS5IrkuMEkysddNvn/e4wpNuM3my5G6KPel5VKZYi5Ms=)
2026-03-01 00:25:35.590921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf97awZrjkVpr8XFDEcWa3fGFWd5InbqDYBTCT0ZL6OS0kROQkEu5aKiLZZHcP6s6ccXOmPWiHeoVAJ0Y2Cl3xWnKLVFfDpYJXksJFJwyNODaKZa1LIHIUNMk49D6gC9r1KrFQg35EcWsIGXowSJ3kx5iUbtAHyspFeq/u3H8bwh8fsYJWaARMutN2Ax3R64s1YpDsO3BCoIf4W5PWVUAdOMDMAPIFSimiWCw/JWNLuvmnQEBdtrTSUqtQyBK8+l2rr2AwyGeLUQhs5/RdjfpYGhq1rL5ZWgkeWe8skR4nmlJpAQehtKvKtt9C7Him5WYashuiNz38EeCgm2tw7nssU6nsh5Oy0tYvwQ8V+QFR/P5T7mVQWbNdzb2XzTdqDUExSIOIX5fyDoxduHNW3LM5OddDNwNbnRsWrAEAzuomRdtPUwVkkqAGgvqbyfY5l1iW0b89tuWNX3+gqV+1ZpLNlBeh03IMazy6MbvTAoaEnn5r8Lm6aoqqNNUbPwn0czk=)
2026-03-01 00:25:35.590929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM+crL/H9+BC0AMJc9sR9q2bjkkJaBhfRMYQ89mc2q5B)
2026-03-01 00:25:35.590937 | orchestrator | 
2026-03-01 00:25:35.590944 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.590950 | orchestrator | Sunday 01 March 2026 00:25:31 +0000 (0:00:01.165) 0:00:07.062 **********
2026-03-01 00:25:35.590956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKOWZ/u/UUzj7sE2MN3C2I9SVJ0jSZnlDYWjD8sCPI83ok4qKEgJUQY213Tx3IGPSv8PuT7bY0Jb44ASDmZ8Eo=)
2026-03-01 00:25:35.590981 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPuUr4brSMiGb9TsVAioU4P9tEjCX8Vecb0ow1ApCQ+KT0RTP4W9/wTheIq+YKU2VgNwfLRnuHigUiOwvG0kye8SbpL1RkJ3SkNLzHWwp/GGDAU2URTCdoKxng+DL5iuKUNOpuL8MA+y2F55pS2gWnfxkmbMMLU6Ic8e6DknJANqlph4TX4n8pFlujfdUux5tvUFfZ1hH8L93H5TEi6Ru2PPKsDdf8IYWfCmu5HJMHFJQkc/CCYTcN2MUoDTRa+wdv5eBWrlKlhAQvdKSaF5CYWeJKYup5yUCj5k5rUTz8Fri/tYNDe6nbE6pCh9KzLzUnO8T1el2OzLjDAbjD8LbSiyYZQ9R8bwCsRLXeRwpZkavsxCXmFyFM8CS3AHmQxkjCmts9wp1MUMqFaJBN0r/X/Hlv5mu1d/efdPonaesME9gU3wkGzrv6mFAiXpdnjF/EmeL7nNvgLu/wRaKhupMlyWYru1Tz2NmRxDsSYqmyAqHsEKOxC24Wi96uBfJBsvU=)
2026-03-01 00:25:35.590993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPl2JAEHwgVV2scIPOgiY53+aPoaN4raH+KtPIRhM4wf)
2026-03-01 00:25:35.591000 | orchestrator | 
2026-03-01 00:25:35.591006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.591012 | orchestrator | Sunday 01 March 2026 00:25:32 +0000 (0:00:01.025) 0:00:08.088 **********
2026-03-01 00:25:35.591019 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILE8du+zrCjdcvcnzdKuU2SS2d8T4iZYVc/+D/NwRNHR)
2026-03-01 00:25:35.591025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7FxCvWxXQg5gP4Ttq9dPa5DhAvHS2gc6cBBRN9UTl4Ozz52hPXcPblgFDwqeBQ2hHaNUpbYvYFgfUTDsMOwJS7w+OBjri9JhV3JZiP3/a0om1P6+VG8Ep1k/8HxvFzBP9o6ky1VrnibaD10IMAACBi/0i2yscQnO6CqHqdX8A6XMfWKAPA/waZhRxYfM1Q3wx8wt7N3U5PogUT6t9yctxV4Oc/qqHbdsUquSSDC22Gn7S+kO7HxPqYV/SGMNPGV9rOJTIk530bLCfXNBjPg3n9UYtEI9OqKjJ8elLwlBWG3SpTVQtQIcx2GZeDr2dV0yel1hXtIRRIrgM8E5Gha4JdZlogOzHZGVuTaQz0Oi1QNvpKmIFoO+QUDLc5x101NrgIhPnN47gjA6DxnhyvzcMOwNsc0eJs8JQWacg11QnQy74zLwrDvb2y8UbKPwG29aqhukqljOgLgq0WsHWsLOXkSutOieFrfBSOmfNY7ROJRTJfrSX+seNQmm3Zv1ion8=)
2026-03-01 00:25:35.591070 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD76XCbhHbaz5dZw0Y30xtZAiSRcrAkaO5zt5o0Y9BlwBo3PYQaRdaotMUEY5+m4iJpKvRyqTFmIbrat5xvMYoY=)
2026-03-01 00:25:35.591077 | orchestrator | 
2026-03-01 00:25:35.591084 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.591090 | orchestrator | Sunday 01 March 2026 00:25:33 +0000 (0:00:01.009) 0:00:09.098 **********
2026-03-01 00:25:35.591098 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINgB/47m6odckUAp6xcYBcrJU3gD3vQIU+QHTLHMIGDu)
2026-03-01 00:25:35.591105 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKFsLKigvx7ALAzH9/2SWfrnCu95nbB0Ay3MT4RQ16KKjwxOaGGu3WGEQLES4nesc+rOBhqYOpVHf+s55Swwhcdctw3H9OBSrO6SuLUaoex4vowg8ilS8+BDQfUxWNlogmDBjl4JtLg5qBDRVh1SK5ohM0CqCU0nfGkLTjpdH8NF0yM6vdXXhijBxSC85RRDwdKvhPQnU1uHSsQsAqNq5qfGrXqwBcVETJnzR4zBVisQ8DFtruEbP1YaUfWPuwlNN3Gw3KTEude0yK6mQGF+QAvCWAbroXJy9coEGBCmeBJrw/6KHWqN4n5E9h6yPQybOm0f1YQK4G2v5A9jJrhqIWCKG+JoyqMZtkcxCr6o3P+rne+Apd10pyZoEOM0O+hW0eY8DaS7V7f0FNvJSjWS5YRNUR633CLQeplIri6otJMN3JO7ue+qakDNZ1jy+0swYJ5Qj7tPbtb2GnKUzbO7ugH+KXlLA8UsJ6ouX9wBnHyRWVdQaovkwMpnK0JMoofek=)
2026-03-01 00:25:35.591112 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDPovmG9Tz59zcnV97IYX+IY2FVYMPLcyDDnNY867e399jXwneOVkBlSQTdrZtOBoNU82C7Dz2ltsSNWxF00RU=)
2026-03-01 00:25:35.591118 | orchestrator | 
2026-03-01 00:25:35.591124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.591131 | orchestrator | Sunday 01 March 2026 00:25:34 +0000 (0:00:01.049) 0:00:10.148 **********
2026-03-01 00:25:35.591137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQxHx+QFymtWN98tkRL1N1FOWSXG3WhBGYQO4PVTMUjCfKhM/rHTcube2NLbA7Upd7khU2HoHrL3Ts+3bohihdoadzN8FDz6AjMxqXqrZogY4nbFYqa4Qvkhge8r/SJHb3G4Q3QgTSdCBNPLaKKX0W9WpRsnJPcuod0r4PNgur9USAzwUjBcQluN9MhwD0loVOek8EE7aUTVP+WpGNARZSipQ9w6wmQSslgWRkGtOT9ASEtdhft7Yw5BrzSkCpZkZ2JixQa6WhdKzKHM5fwadE1zdPT3n0MqjEjuEkSjwmw1zhOt6lwwIRxyI66ZipXWXbEXB0Ltqz8+sDRXzvQImDPUfHZ4o/5IWXvn3JTP8fKo+IvbzNgUJXMhIPSl8WpOp21lDCl0e69HOE+Yo/qAsQ8OT/8XmxAQZyMQ/6XL+CsUfsXUgCwGrXgiXo2q6nF1lNwTA34NktwOVaGYg4oyUtnK6+5pyYbIvcEZI+yRLw70tTXJSJ/iwtS9/+jI1vPWc=)
2026-03-01 00:25:35.591148 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/52Y+9hIohSCchzEsz10Za+f5S0d0ax5/rL7TbgYBnL9ojxBCRqQVLVqujwv3GGIIxIYGqzMDfs1ihueeHSzA=)
2026-03-01 00:25:35.591154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPfLjR4EeWzLdsnwtreQSWf5BSIF62ztUpPEOP/u0lY)
2026-03-01 00:25:35.591160 | orchestrator | 
2026-03-01 00:25:35.591167 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:35.591173 | orchestrator | Sunday 01 March 2026 00:25:35 +0000 (0:00:01.074) 0:00:11.223 **********
2026-03-01 00:25:35.591183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4BCwjG+snjfEAjTSROmR1ab9BpTjrgDRXfT3kaLyzyk/19cjBZp6Tc7Yfm96tVre+K4DdUeXwCC1oXE3gIzZk=)
2026-03-01 00:25:46.542269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPV+BxtfOCWkkzSQ5FVXzoOBF0XrhK4KBHwnnrP4Cctn)
2026-03-01 00:25:46.542385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYaSMYWZgkHp0n3zVM+YubgbhKbOxSpJfkuxgEKBd6vXp5TpkiEZUEIOtXJq+6dFfNWxyIZ1soJtVmEuS1cMup+zEOqV9sJQeENVvrDMrq0h4e7IUAx4kriKdcF0fl+G8wgfzV+IY8ZChMtBqy21LAg/lfSOfyOSP4ERRAxascGNlDCSO4PNtjr19gjIFLAxtOirft1NKTbo7Aegkeu9e7R3yJXGz0ghAlvRZSA6J7Nn5dWs4sklxjr62UNnZ2AZY+I61gVnmYQN1B88hxwSt4UpMIGugw0A3crxEnoiPTIvmNwbZVk2itTMVPhvtwdVEMmAh9RyA4LXW0SY3mzGa4hW0FGxIi/X2gXK4HT/2/F1Sx+fDWXOSLp3qiKrEt51k0M5hK0NvZ+Jvf33Zr4MPdsU6yFCGQQc+Ov+g4prnp1au5AdtVLBK+8hmLPudMTw8NeBXSCLU1oegwpmkTi1EliSbQDbaGSuOSALppHeOpQodFQ6CkH+LUnjRqIctvbps=)
2026-03-01 00:25:46.542405 | orchestrator | 
2026-03-01 00:25:46.542418 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:46.542431 | orchestrator | Sunday 01 March 2026 00:25:36 +0000 (0:00:01.052) 0:00:12.275 **********
2026-03-01 00:25:46.542443 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8ZVv+jshrUhQgRWzcvhMhbsqcmlJZWSSnNDt+GsOmpqs784ylsqaAFb44AaDBdhrLLOHPIHBUq6eMqnAamosgtrg1Yy0M9EcfqwJonyYvyHZO9kZeay2hjBOXin56HntMvQDpVN9mXxaGUPRY3rqHWckCeMneOVcga3mhoD5UShzyKHxGb0OlHPdht13zCmx9s0PPRGVQLoSRcd2fMzYD6r1vB8fIsIM86e1FfLuzGkPY9VPHz5g0X3UkZu8J7oYera1UYnPsafetRKnejLSgg03m2db8AEgLqUtAV/c0WcuxGyT/xfpB1rLG5/AenxUcfLXfXv0gjWYKH2faS5gxtZmV5gcWbJMkjy29yTdxqrY31sJcdSaa39hicmISl3p0aeUbZ35nfBk7/12mzBCapS1Dxb5ZAaYUtl4mi7SNh1bahSMZikoHXJhnULRYZMxBxKpOFShrUwpPrGSF6nGrdw1FVgub5LZq6WMvMNh38cPuxTnQ/SPas4s4r1eMO7c=)
2026-03-01 00:25:46.542455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNPTSzKGjNrZlcB9b2cZHqw3lmZMHBV5JDcMnaYrsp41UcFSbCBzv5lOdUKFuIlohTty6KziChSu5ekgFcGgHNw=)
2026-03-01 00:25:46.542469 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIyao3uNHauGBjgSOwcLmOMwmSyOKoE//HJG8jfpw1ai)
2026-03-01 00:25:46.542480 | orchestrator | 
2026-03-01 00:25:46.542491 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-03-01 00:25:46.542504 | orchestrator | Sunday 01 March 2026 00:25:37 +0000 (0:00:01.048) 0:00:13.324 **********
2026-03-01 00:25:46.542515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-01 00:25:46.542527 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-01 00:25:46.542538 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-01 00:25:46.542549 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-01 00:25:46.542560 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-01 00:25:46.542589 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-01 00:25:46.542600 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-01 00:25:46.542632 | orchestrator | 
2026-03-01 00:25:46.542646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-03-01 00:25:46.542661 | orchestrator | Sunday 01 March 2026 00:25:42 +0000 (0:00:05.323) 0:00:18.647 **********
2026-03-01 00:25:46.542675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-01 00:25:46.542691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-01 00:25:46.542703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-01 00:25:46.542715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-01 00:25:46.542727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-01 00:25:46.542739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-01 00:25:46.542752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-01 00:25:46.542765 | orchestrator | 
2026-03-01 00:25:46.542797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-01 00:25:46.542810 | orchestrator | Sunday 01 March 2026 00:25:42 +0000 (0:00:00.190) 0:00:18.838 **********
2026-03-01 00:25:46.542826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf97awZrjkVpr8XFDEcWa3fGFWd5InbqDYBTCT0ZL6OS0kROQkEu5aKiLZZHcP6s6ccXOmPWiHeoVAJ0Y2Cl3xWnKLVFfDpYJXksJFJwyNODaKZa1LIHIUNMk49D6gC9r1KrFQg35EcWsIGXowSJ3kx5iUbtAHyspFeq/u3H8bwh8fsYJWaARMutN2Ax3R64s1YpDsO3BCoIf4W5PWVUAdOMDMAPIFSimiWCw/JWNLuvmnQEBdtrTSUqtQyBK8+l2rr2AwyGeLUQhs5/RdjfpYGhq1rL5ZWgkeWe8skR4nmlJpAQehtKvKtt9C7Him5WYashuiNz38EeCgm2tw7nssU6nsh5Oy0tYvwQ8V+QFR/P5T7mVQWbNdzb2XzTdqDUExSIOIX5fyDoxduHNW3LM5OddDNwNbnRsWrAEAzuomRdtPUwVkkqAGgvqbyfY5l1iW0b89tuWNX3+gqV+1ZpLNlBeh03IMazy6MbvTAoaEnn5r8Lm6aoqqNNUbPwn0czk=)
2026-03-01 00:25:46.542839 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKYYBCKZvVEQV4+Jg3n1R3IO0RNWuvoYaIvTnjAeo5VyS5IrkuMEkysddNvn/e4wpNuM3my5G6KPel5VKZYi5Ms=) 2026-03-01 00:25:46.542853 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM+crL/H9+BC0AMJc9sR9q2bjkkJaBhfRMYQ89mc2q5B) 2026-03-01 00:25:46.542868 | orchestrator | 2026-03-01 00:25:46.542888 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:46.542908 | orchestrator | Sunday 01 March 2026 00:25:43 +0000 (0:00:01.032) 0:00:19.870 ********** 2026-03-01 00:25:46.542927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKOWZ/u/UUzj7sE2MN3C2I9SVJ0jSZnlDYWjD8sCPI83ok4qKEgJUQY213Tx3IGPSv8PuT7bY0Jb44ASDmZ8Eo=) 2026-03-01 00:25:46.542948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPuUr4brSMiGb9TsVAioU4P9tEjCX8Vecb0ow1ApCQ+KT0RTP4W9/wTheIq+YKU2VgNwfLRnuHigUiOwvG0kye8SbpL1RkJ3SkNLzHWwp/GGDAU2URTCdoKxng+DL5iuKUNOpuL8MA+y2F55pS2gWnfxkmbMMLU6Ic8e6DknJANqlph4TX4n8pFlujfdUux5tvUFfZ1hH8L93H5TEi6Ru2PPKsDdf8IYWfCmu5HJMHFJQkc/CCYTcN2MUoDTRa+wdv5eBWrlKlhAQvdKSaF5CYWeJKYup5yUCj5k5rUTz8Fri/tYNDe6nbE6pCh9KzLzUnO8T1el2OzLjDAbjD8LbSiyYZQ9R8bwCsRLXeRwpZkavsxCXmFyFM8CS3AHmQxkjCmts9wp1MUMqFaJBN0r/X/Hlv5mu1d/efdPonaesME9gU3wkGzrv6mFAiXpdnjF/EmeL7nNvgLu/wRaKhupMlyWYru1Tz2NmRxDsSYqmyAqHsEKOxC24Wi96uBfJBsvU=) 2026-03-01 00:25:46.542980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPl2JAEHwgVV2scIPOgiY53+aPoaN4raH+KtPIRhM4wf) 2026-03-01 00:25:46.542991 | orchestrator | 2026-03-01 00:25:46.543002 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:46.543013 | orchestrator | Sunday 01 March 2026 00:25:44 +0000 (0:00:01.003) 0:00:20.874 ********** 2026-03-01 00:25:46.543024 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD76XCbhHbaz5dZw0Y30xtZAiSRcrAkaO5zt5o0Y9BlwBo3PYQaRdaotMUEY5+m4iJpKvRyqTFmIbrat5xvMYoY=) 2026-03-01 00:25:46.543035 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7FxCvWxXQg5gP4Ttq9dPa5DhAvHS2gc6cBBRN9UTl4Ozz52hPXcPblgFDwqeBQ2hHaNUpbYvYFgfUTDsMOwJS7w+OBjri9JhV3JZiP3/a0om1P6+VG8Ep1k/8HxvFzBP9o6ky1VrnibaD10IMAACBi/0i2yscQnO6CqHqdX8A6XMfWKAPA/waZhRxYfM1Q3wx8wt7N3U5PogUT6t9yctxV4Oc/qqHbdsUquSSDC22Gn7S+kO7HxPqYV/SGMNPGV9rOJTIk530bLCfXNBjPg3n9UYtEI9OqKjJ8elLwlBWG3SpTVQtQIcx2GZeDr2dV0yel1hXtIRRIrgM8E5Gha4JdZlogOzHZGVuTaQz0Oi1QNvpKmIFoO+QUDLc5x101NrgIhPnN47gjA6DxnhyvzcMOwNsc0eJs8JQWacg11QnQy74zLwrDvb2y8UbKPwG29aqhukqljOgLgq0WsHWsLOXkSutOieFrfBSOmfNY7ROJRTJfrSX+seNQmm3Zv1ion8=) 2026-03-01 00:25:46.543047 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILE8du+zrCjdcvcnzdKuU2SS2d8T4iZYVc/+D/NwRNHR) 2026-03-01 00:25:46.543057 | orchestrator | 2026-03-01 00:25:46.543068 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:46.543079 | orchestrator | Sunday 01 March 2026 00:25:45 +0000 (0:00:01.013) 0:00:21.887 ********** 2026-03-01 00:25:46.543090 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINgB/47m6odckUAp6xcYBcrJU3gD3vQIU+QHTLHMIGDu) 2026-03-01 00:25:46.543122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKFsLKigvx7ALAzH9/2SWfrnCu95nbB0Ay3MT4RQ16KKjwxOaGGu3WGEQLES4nesc+rOBhqYOpVHf+s55Swwhcdctw3H9OBSrO6SuLUaoex4vowg8ilS8+BDQfUxWNlogmDBjl4JtLg5qBDRVh1SK5ohM0CqCU0nfGkLTjpdH8NF0yM6vdXXhijBxSC85RRDwdKvhPQnU1uHSsQsAqNq5qfGrXqwBcVETJnzR4zBVisQ8DFtruEbP1YaUfWPuwlNN3Gw3KTEude0yK6mQGF+QAvCWAbroXJy9coEGBCmeBJrw/6KHWqN4n5E9h6yPQybOm0f1YQK4G2v5A9jJrhqIWCKG+JoyqMZtkcxCr6o3P+rne+Apd10pyZoEOM0O+hW0eY8DaS7V7f0FNvJSjWS5YRNUR633CLQeplIri6otJMN3JO7ue+qakDNZ1jy+0swYJ5Qj7tPbtb2GnKUzbO7ugH+KXlLA8UsJ6ouX9wBnHyRWVdQaovkwMpnK0JMoofek=) 2026-03-01 00:25:50.968907 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDPovmG9Tz59zcnV97IYX+IY2FVYMPLcyDDnNY867e399jXwneOVkBlSQTdrZtOBoNU82C7Dz2ltsSNWxF00RU=) 2026-03-01 00:25:50.969041 | orchestrator | 2026-03-01 00:25:50.969071 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:50.969093 | orchestrator | Sunday 01 March 2026 00:25:46 +0000 (0:00:01.009) 0:00:22.897 ********** 2026-03-01 00:25:50.969116 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQxHx+QFymtWN98tkRL1N1FOWSXG3WhBGYQO4PVTMUjCfKhM/rHTcube2NLbA7Upd7khU2HoHrL3Ts+3bohihdoadzN8FDz6AjMxqXqrZogY4nbFYqa4Qvkhge8r/SJHb3G4Q3QgTSdCBNPLaKKX0W9WpRsnJPcuod0r4PNgur9USAzwUjBcQluN9MhwD0loVOek8EE7aUTVP+WpGNARZSipQ9w6wmQSslgWRkGtOT9ASEtdhft7Yw5BrzSkCpZkZ2JixQa6WhdKzKHM5fwadE1zdPT3n0MqjEjuEkSjwmw1zhOt6lwwIRxyI66ZipXWXbEXB0Ltqz8+sDRXzvQImDPUfHZ4o/5IWXvn3JTP8fKo+IvbzNgUJXMhIPSl8WpOp21lDCl0e69HOE+Yo/qAsQ8OT/8XmxAQZyMQ/6XL+CsUfsXUgCwGrXgiXo2q6nF1lNwTA34NktwOVaGYg4oyUtnK6+5pyYbIvcEZI+yRLw70tTXJSJ/iwtS9/+jI1vPWc=) 2026-03-01 00:25:50.969140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/52Y+9hIohSCchzEsz10Za+f5S0d0ax5/rL7TbgYBnL9ojxBCRqQVLVqujwv3GGIIxIYGqzMDfs1ihueeHSzA=) 
2026-03-01 00:25:50.969195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPfLjR4EeWzLdsnwtreQSWf5BSIF62ztUpPEOP/u0lY) 2026-03-01 00:25:50.969209 | orchestrator | 2026-03-01 00:25:50.969221 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:50.969296 | orchestrator | Sunday 01 March 2026 00:25:47 +0000 (0:00:01.029) 0:00:23.926 ********** 2026-03-01 00:25:50.969311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI4BCwjG+snjfEAjTSROmR1ab9BpTjrgDRXfT3kaLyzyk/19cjBZp6Tc7Yfm96tVre+K4DdUeXwCC1oXE3gIzZk=) 2026-03-01 00:25:50.969324 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYaSMYWZgkHp0n3zVM+YubgbhKbOxSpJfkuxgEKBd6vXp5TpkiEZUEIOtXJq+6dFfNWxyIZ1soJtVmEuS1cMup+zEOqV9sJQeENVvrDMrq0h4e7IUAx4kriKdcF0fl+G8wgfzV+IY8ZChMtBqy21LAg/lfSOfyOSP4ERRAxascGNlDCSO4PNtjr19gjIFLAxtOirft1NKTbo7Aegkeu9e7R3yJXGz0ghAlvRZSA6J7Nn5dWs4sklxjr62UNnZ2AZY+I61gVnmYQN1B88hxwSt4UpMIGugw0A3crxEnoiPTIvmNwbZVk2itTMVPhvtwdVEMmAh9RyA4LXW0SY3mzGa4hW0FGxIi/X2gXK4HT/2/F1Sx+fDWXOSLp3qiKrEt51k0M5hK0NvZ+Jvf33Zr4MPdsU6yFCGQQc+Ov+g4prnp1au5AdtVLBK+8hmLPudMTw8NeBXSCLU1oegwpmkTi1EliSbQDbaGSuOSALppHeOpQodFQ6CkH+LUnjRqIctvbps=) 2026-03-01 00:25:50.969336 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPV+BxtfOCWkkzSQ5FVXzoOBF0XrhK4KBHwnnrP4Cctn) 2026-03-01 00:25:50.969347 | orchestrator | 2026-03-01 00:25:50.969358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-01 00:25:50.969369 | orchestrator | Sunday 01 March 2026 00:25:48 +0000 (0:00:01.037) 0:00:24.964 ********** 2026-03-01 00:25:50.969380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNPTSzKGjNrZlcB9b2cZHqw3lmZMHBV5JDcMnaYrsp41UcFSbCBzv5lOdUKFuIlohTty6KziChSu5ekgFcGgHNw=) 2026-03-01 00:25:50.969392 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8ZVv+jshrUhQgRWzcvhMhbsqcmlJZWSSnNDt+GsOmpqs784ylsqaAFb44AaDBdhrLLOHPIHBUq6eMqnAamosgtrg1Yy0M9EcfqwJonyYvyHZO9kZeay2hjBOXin56HntMvQDpVN9mXxaGUPRY3rqHWckCeMneOVcga3mhoD5UShzyKHxGb0OlHPdht13zCmx9s0PPRGVQLoSRcd2fMzYD6r1vB8fIsIM86e1FfLuzGkPY9VPHz5g0X3UkZu8J7oYera1UYnPsafetRKnejLSgg03m2db8AEgLqUtAV/c0WcuxGyT/xfpB1rLG5/AenxUcfLXfXv0gjWYKH2faS5gxtZmV5gcWbJMkjy29yTdxqrY31sJcdSaa39hicmISl3p0aeUbZ35nfBk7/12mzBCapS1Dxb5ZAaYUtl4mi7SNh1bahSMZikoHXJhnULRYZMxBxKpOFShrUwpPrGSF6nGrdw1FVgub5LZq6WMvMNh38cPuxTnQ/SPas4s4r1eMO7c=) 2026-03-01 00:25:50.969403 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIyao3uNHauGBjgSOwcLmOMwmSyOKoE//HJG8jfpw1ai) 2026-03-01 00:25:50.969417 | orchestrator | 2026-03-01 00:25:50.969429 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-01 00:25:50.969447 | orchestrator | Sunday 01 March 2026 00:25:49 +0000 (0:00:01.018) 0:00:25.982 ********** 2026-03-01 00:25:50.969465 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-01 00:25:50.969485 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-01 00:25:50.969504 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-01 00:25:50.969524 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-01 00:25:50.969568 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-01 00:25:50.969587 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-01 00:25:50.969606 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-01 00:25:50.969626 | orchestrator | skipping: 
[testbed-manager] 2026-03-01 00:25:50.969645 | orchestrator | 2026-03-01 00:25:50.969663 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-01 00:25:50.969681 | orchestrator | Sunday 01 March 2026 00:25:50 +0000 (0:00:00.152) 0:00:26.134 ********** 2026-03-01 00:25:50.969721 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:25:50.969736 | orchestrator | 2026-03-01 00:25:50.969747 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-01 00:25:50.969770 | orchestrator | Sunday 01 March 2026 00:25:50 +0000 (0:00:00.049) 0:00:26.184 ********** 2026-03-01 00:25:50.969782 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:25:50.969793 | orchestrator | 2026-03-01 00:25:50.969804 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-01 00:25:50.969815 | orchestrator | Sunday 01 March 2026 00:25:50 +0000 (0:00:00.049) 0:00:26.233 ********** 2026-03-01 00:25:50.969826 | orchestrator | changed: [testbed-manager] 2026-03-01 00:25:50.969837 | orchestrator | 2026-03-01 00:25:50.969855 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:25:50.969874 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-01 00:25:50.969894 | orchestrator | 2026-03-01 00:25:50.969913 | orchestrator | 2026-03-01 00:25:50.969931 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:25:50.969949 | orchestrator | Sunday 01 March 2026 00:25:50 +0000 (0:00:00.605) 0:00:26.839 ********** 2026-03-01 00:25:50.969966 | orchestrator | =============================================================================== 2026-03-01 00:25:50.969984 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.62s 2026-03-01 
00:25:50.970002 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.32s 2026-03-01 00:25:50.970095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-01 00:25:50.970116 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-01 00:25:50.970134 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-01 00:25:50.970151 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-01 00:25:50.970169 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-01 00:25:50.970186 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-01 00:25:50.970203 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-01 00:25:50.970221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-01 00:25:50.970263 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-01 00:25:50.970281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-01 00:25:50.970300 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-01 00:25:50.970335 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-01 00:25:50.970354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-01 00:25:50.970373 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-01 00:25:50.970390 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.61s 2026-03-01 
00:25:50.970408 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-03-01 00:25:50.970429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-03-01 00:25:50.970448 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-01 00:25:51.151664 | orchestrator | + osism apply squid 2026-03-01 00:26:03.091865 | orchestrator | 2026-03-01 00:26:03 | INFO  | Prepare task for execution of squid. 2026-03-01 00:26:03.174361 | orchestrator | 2026-03-01 00:26:03 | INFO  | Task 0149939b-353e-47f1-87a8-284de3901561 (squid) was prepared for execution. 2026-03-01 00:26:03.174453 | orchestrator | 2026-03-01 00:26:03 | INFO  | It takes a moment until task 0149939b-353e-47f1-87a8-284de3901561 (squid) has been started and output is visible here. 2026-03-01 00:28:00.421997 | orchestrator | 2026-03-01 00:28:00.422131 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-01 00:28:00.422138 | orchestrator | 2026-03-01 00:28:00.422144 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-01 00:28:00.422148 | orchestrator | Sunday 01 March 2026 00:26:07 +0000 (0:00:00.137) 0:00:00.137 ********** 2026-03-01 00:28:00.422153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-01 00:28:00.422160 | orchestrator | 2026-03-01 00:28:00.422164 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-01 00:28:00.422168 | orchestrator | Sunday 01 March 2026 00:26:07 +0000 (0:00:00.065) 0:00:00.203 ********** 2026-03-01 00:28:00.422173 | orchestrator | ok: [testbed-manager] 2026-03-01 00:28:00.422178 | orchestrator | 2026-03-01 00:28:00.422181 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-01 00:28:00.422185 | orchestrator | Sunday 01 March 2026 00:26:08 +0000 (0:00:01.240) 0:00:01.443 ********** 2026-03-01 00:28:00.422190 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-01 00:28:00.422194 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-01 00:28:00.422198 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-01 00:28:00.422202 | orchestrator | 2026-03-01 00:28:00.422206 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-01 00:28:00.422210 | orchestrator | Sunday 01 March 2026 00:26:09 +0000 (0:00:01.076) 0:00:02.520 ********** 2026-03-01 00:28:00.422214 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-01 00:28:00.422218 | orchestrator | 2026-03-01 00:28:00.422222 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-01 00:28:00.422226 | orchestrator | Sunday 01 March 2026 00:26:10 +0000 (0:00:00.966) 0:00:03.486 ********** 2026-03-01 00:28:00.422229 | orchestrator | ok: [testbed-manager] 2026-03-01 00:28:00.422233 | orchestrator | 2026-03-01 00:28:00.422237 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-01 00:28:00.422241 | orchestrator | Sunday 01 March 2026 00:26:10 +0000 (0:00:00.334) 0:00:03.821 ********** 2026-03-01 00:28:00.422245 | orchestrator | changed: [testbed-manager] 2026-03-01 00:28:00.422249 | orchestrator | 2026-03-01 00:28:00.422253 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-01 00:28:00.422256 | orchestrator | Sunday 01 March 2026 00:26:11 +0000 (0:00:00.847) 0:00:04.668 ********** 2026-03-01 00:28:00.422260 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-01 00:28:00.422265 | orchestrator | ok: [testbed-manager] 2026-03-01 00:28:00.422269 | orchestrator | 2026-03-01 00:28:00.422273 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-01 00:28:00.422277 | orchestrator | Sunday 01 March 2026 00:26:47 +0000 (0:00:35.590) 0:00:40.259 ********** 2026-03-01 00:28:00.422304 | orchestrator | changed: [testbed-manager] 2026-03-01 00:28:00.422309 | orchestrator | 2026-03-01 00:28:00.422313 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-01 00:28:00.422356 | orchestrator | Sunday 01 March 2026 00:26:59 +0000 (0:00:12.032) 0:00:52.291 ********** 2026-03-01 00:28:00.422362 | orchestrator | Pausing for 60 seconds 2026-03-01 00:28:00.422366 | orchestrator | changed: [testbed-manager] 2026-03-01 00:28:00.422370 | orchestrator | 2026-03-01 00:28:00.422374 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-01 00:28:00.422378 | orchestrator | Sunday 01 March 2026 00:27:59 +0000 (0:01:00.095) 0:01:52.386 ********** 2026-03-01 00:28:00.422381 | orchestrator | ok: [testbed-manager] 2026-03-01 00:28:00.422385 | orchestrator | 2026-03-01 00:28:00.422389 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-01 00:28:00.422407 | orchestrator | Sunday 01 March 2026 00:27:59 +0000 (0:00:00.076) 0:01:52.463 ********** 2026-03-01 00:28:00.422412 | orchestrator | changed: [testbed-manager] 2026-03-01 00:28:00.422415 | orchestrator | 2026-03-01 00:28:00.422419 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:28:00.422423 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:28:00.422427 | orchestrator | 2026-03-01 00:28:00.422431 | orchestrator | 2026-03-01 00:28:00.422435 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:28:00.422439 | orchestrator | Sunday 01 March 2026 00:28:00 +0000 (0:00:00.613) 0:01:53.076 ********** 2026-03-01 00:28:00.422443 | orchestrator | =============================================================================== 2026-03-01 00:28:00.422446 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-01 00:28:00.422450 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.59s 2026-03-01 00:28:00.422454 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.03s 2026-03-01 00:28:00.422458 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.24s 2026-03-01 00:28:00.422461 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s 2026-03-01 00:28:00.422465 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.97s 2026-03-01 00:28:00.422469 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.85s 2026-03-01 00:28:00.422473 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-03-01 00:28:00.422476 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-01 00:28:00.422480 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-03-01 00:28:00.422484 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-01 00:28:00.728892 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-01 00:28:00.728997 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-01 00:28:00.735900 | orchestrator | + set -e 2026-03-01 00:28:00.735973 | orchestrator | + NAMESPACE=kolla 
2026-03-01 00:28:00.735981 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-01 00:28:00.742469 | orchestrator | ++ semver latest 9.0.0 2026-03-01 00:28:00.794968 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-01 00:28:00.795040 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-01 00:28:00.795849 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-01 00:28:12.792858 | orchestrator | 2026-03-01 00:28:12 | INFO  | Prepare task for execution of operator. 2026-03-01 00:28:12.861062 | orchestrator | 2026-03-01 00:28:12 | INFO  | Task 12ee4667-da9d-4a10-9b24-e220c168254f (operator) was prepared for execution. 2026-03-01 00:28:12.861150 | orchestrator | 2026-03-01 00:28:12 | INFO  | It takes a moment until task 12ee4667-da9d-4a10-9b24-e220c168254f (operator) has been started and output is visible here. 2026-03-01 00:28:29.349675 | orchestrator | 2026-03-01 00:28:29.349783 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-01 00:28:29.349800 | orchestrator | 2026-03-01 00:28:29.349812 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-01 00:28:29.349824 | orchestrator | Sunday 01 March 2026 00:28:16 +0000 (0:00:00.138) 0:00:00.138 ********** 2026-03-01 00:28:29.349835 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:28:29.349848 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:28:29.349859 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:28:29.349870 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:28:29.349881 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:28:29.349892 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:28:29.349907 | orchestrator | 2026-03-01 00:28:29.349919 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-01 00:28:29.349956 | orchestrator | Sunday 01 March 2026 
00:28:20 +0000 (0:00:03.437) 0:00:03.576 ********** 2026-03-01 00:28:29.349968 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:28:29.349979 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:28:29.349990 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:28:29.350001 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:28:29.350011 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:28:29.350107 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:28:29.350138 | orchestrator | 2026-03-01 00:28:29.350179 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-01 00:28:29.350199 | orchestrator | 2026-03-01 00:28:29.350218 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-01 00:28:29.350237 | orchestrator | Sunday 01 March 2026 00:28:21 +0000 (0:00:00.746) 0:00:04.323 ********** 2026-03-01 00:28:29.350255 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:28:29.350272 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:28:29.350315 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:28:29.350334 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:28:29.350352 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:28:29.350370 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:28:29.350389 | orchestrator | 2026-03-01 00:28:29.350406 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-01 00:28:29.350426 | orchestrator | Sunday 01 March 2026 00:28:21 +0000 (0:00:00.156) 0:00:04.479 ********** 2026-03-01 00:28:29.350444 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:28:29.350463 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:28:29.350483 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:28:29.350501 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:28:29.350543 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:28:29.350564 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:28:29.350583 | 
orchestrator |
2026-03-01 00:28:29.350600 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-01 00:28:29.350612 | orchestrator | Sunday 01 March 2026 00:28:21 +0000 (0:00:00.159) 0:00:04.638 **********
2026-03-01 00:28:29.350623 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:29.350635 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:29.350646 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:29.350656 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:29.350667 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:29.350678 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:29.350689 | orchestrator |
2026-03-01 00:28:29.350700 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-01 00:28:29.350711 | orchestrator | Sunday 01 March 2026 00:28:22 +0000 (0:00:00.736) 0:00:05.375 **********
2026-03-01 00:28:29.350722 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:29.350733 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:29.350743 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:29.350754 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:29.350765 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:29.350775 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:29.350786 | orchestrator |
2026-03-01 00:28:29.350797 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-01 00:28:29.350808 | orchestrator | Sunday 01 March 2026 00:28:23 +0000 (0:00:00.933) 0:00:06.308 **********
2026-03-01 00:28:29.350819 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-01 00:28:29.350830 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-01 00:28:29.350841 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-01 00:28:29.350852 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-01 00:28:29.350863 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-01 00:28:29.350873 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-01 00:28:29.350884 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-01 00:28:29.350895 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-01 00:28:29.350905 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-01 00:28:29.350929 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-01 00:28:29.350940 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-01 00:28:29.350951 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-01 00:28:29.350961 | orchestrator |
2026-03-01 00:28:29.350972 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-01 00:28:29.350983 | orchestrator | Sunday 01 March 2026 00:28:24 +0000 (0:00:01.299) 0:00:07.607 **********
2026-03-01 00:28:29.350994 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:29.351005 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:29.351016 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:29.351026 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:29.351037 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:29.351048 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:29.351059 | orchestrator |
2026-03-01 00:28:29.351070 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-01 00:28:29.351082 | orchestrator | Sunday 01 March 2026 00:28:25 +0000 (0:00:01.183) 0:00:08.791 **********
2026-03-01 00:28:29.351093 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351104 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351115 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351126 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351137 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351171 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-01 00:28:29.351183 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351194 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351204 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351215 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351226 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351237 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-01 00:28:29.351248 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351259 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-01 00:28:29.351270 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-01 00:28:29.351281 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-01 00:28:29.351318 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351337 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351348 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351359 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351370 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-01 00:28:29.351381 | orchestrator |
2026-03-01 00:28:29.351393 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-01 00:28:29.351413 | orchestrator | Sunday 01 March 2026 00:28:26 +0000 (0:00:01.337) 0:00:10.129 **********
2026-03-01 00:28:29.351430 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:29.351446 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:29.351461 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:29.351500 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:29.351518 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:29.351534 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:29.351549 | orchestrator |
2026-03-01 00:28:29.351566 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-01 00:28:29.351595 | orchestrator | Sunday 01 March 2026 00:28:27 +0000 (0:00:00.141) 0:00:10.270 **********
2026-03-01 00:28:29.351614 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:29.351630 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:29.351647 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:29.351662 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:29.351681 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:29.351698 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:29.351715 | orchestrator |
2026-03-01 00:28:29.351733 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-01 00:28:29.351753 | orchestrator | Sunday 01 March 2026 00:28:27 +0000 (0:00:00.164) 0:00:10.435 **********
2026-03-01 00:28:29.351771 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:29.351791 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:29.351806 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:29.351817 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:29.351827 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:29.351838 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:29.351849 | orchestrator |
2026-03-01 00:28:29.351860 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-01 00:28:29.351871 | orchestrator | Sunday 01 March 2026 00:28:27 +0000 (0:00:00.703) 0:00:11.138 **********
2026-03-01 00:28:29.351882 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:29.351893 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:29.351904 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:29.351914 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:29.351925 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:29.351936 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:29.351947 | orchestrator |
2026-03-01 00:28:29.351958 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-01 00:28:29.351969 | orchestrator | Sunday 01 March 2026 00:28:28 +0000 (0:00:00.183) 0:00:11.322 **********
2026-03-01 00:28:29.351980 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-01 00:28:29.351992 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:29.352002 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 00:28:29.352013 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:29.352024 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-01 00:28:29.352035 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:29.352046 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-01 00:28:29.352057 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:29.352068 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-01 00:28:29.352079 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-01 00:28:29.352089 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:29.352100 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:29.352111 | orchestrator |
2026-03-01 00:28:29.352122 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-01 00:28:29.352133 | orchestrator | Sunday 01 March 2026 00:28:29 +0000 (0:00:00.913) 0:00:12.235 **********
2026-03-01 00:28:29.352144 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:29.352155 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:29.352186 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:29.352199 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:29.352209 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:29.352231 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:29.352242 | orchestrator |
2026-03-01 00:28:29.352253 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-01 00:28:29.352264 | orchestrator | Sunday 01 March 2026 00:28:29 +0000 (0:00:00.145) 0:00:12.381 **********
2026-03-01 00:28:29.352275 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:29.352286 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:29.352347 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:29.352358 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:29.352393 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:30.774885 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:30.774973 | orchestrator |
2026-03-01 00:28:30.774984 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-01 00:28:30.774993 | orchestrator | Sunday 01 March 2026 00:28:29 +0000 (0:00:00.140) 0:00:12.522 **********
2026-03-01 00:28:30.774999 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:30.775005 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:30.775012 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:30.775018 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:30.775025 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:30.775031 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:30.775037 | orchestrator |
2026-03-01 00:28:30.775044 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-01 00:28:30.775050 | orchestrator | Sunday 01 March 2026 00:28:29 +0000 (0:00:00.139) 0:00:12.661 **********
2026-03-01 00:28:30.775056 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:28:30.775063 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:28:30.775067 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:28:30.775071 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:28:30.775075 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:28:30.775079 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:28:30.775082 | orchestrator |
2026-03-01 00:28:30.775087 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-01 00:28:30.775091 | orchestrator | Sunday 01 March 2026 00:28:30 +0000 (0:00:00.815) 0:00:13.477 **********
2026-03-01 00:28:30.775094 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:28:30.775098 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:28:30.775102 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:28:30.775106 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:28:30.775109 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:28:30.775113 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:28:30.775117 | orchestrator |
2026-03-01 00:28:30.775121 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:28:30.775126 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775149 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775154 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775158 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775162 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775166 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 00:28:30.775170 | orchestrator |
2026-03-01 00:28:30.775173 | orchestrator |
2026-03-01 00:28:30.775177 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:28:30.775181 | orchestrator | Sunday 01 March 2026 00:28:30 +0000 (0:00:00.238) 0:00:13.715 **********
2026-03-01 00:28:30.775185 | orchestrator | ===============================================================================
2026-03-01 00:28:30.775188 | orchestrator | Gathering Facts --------------------------------------------------------- 3.44s
2026-03-01 00:28:30.775193 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s
2026-03-01 00:28:30.775197 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.30s
2026-03-01 00:28:30.775214 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-03-01 00:28:30.775218 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.93s
2026-03-01 00:28:30.775222 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.91s
2026-03-01 00:28:30.775226 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.82s
2026-03-01 00:28:30.775229 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s
2026-03-01 00:28:30.775233 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.74s
2026-03-01 00:28:30.775237 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.70s
2026-03-01 00:28:30.775240 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-03-01 00:28:30.775244 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-01 00:28:30.775248 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-03-01 00:28:30.775252 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-03-01 00:28:30.775256 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-01 00:28:30.775259 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-01 00:28:30.775263 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-03-01 00:28:30.775267 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-03-01 00:28:30.775271 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-03-01 00:28:31.070408 | orchestrator | + osism apply --environment custom facts
2026-03-01 00:28:32.990257 | orchestrator | 2026-03-01 00:28:32 | INFO  | Trying to run play facts in environment custom
2026-03-01 00:28:43.021200 | orchestrator | 2026-03-01 00:28:43 | INFO  | Prepare task for execution of facts.
2026-03-01 00:28:43.098718 | orchestrator | 2026-03-01 00:28:43 | INFO  | Task 2555079b-2004-4812-8c33-914dbbfac097 (facts) was prepared for execution.
2026-03-01 00:28:43.098807 | orchestrator | 2026-03-01 00:28:43 | INFO  | It takes a moment until task 2555079b-2004-4812-8c33-914dbbfac097 (facts) has been started and output is visible here.
2026-03-01 00:29:28.499154 | orchestrator |
2026-03-01 00:29:28.499254 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-01 00:29:28.499265 | orchestrator |
2026-03-01 00:29:28.499272 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-01 00:29:28.499280 | orchestrator | Sunday 01 March 2026 00:28:47 +0000 (0:00:00.069) 0:00:00.069 **********
2026-03-01 00:29:28.499287 | orchestrator | ok: [testbed-manager]
2026-03-01 00:29:28.499376 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.499386 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.499393 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:29:28.499400 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.499408 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:29:28.499415 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:29:28.499422 | orchestrator |
2026-03-01 00:29:28.499429 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-01 00:29:28.499436 | orchestrator | Sunday 01 March 2026 00:28:48 +0000 (0:00:01.198) 0:00:01.267 **********
2026-03-01 00:29:28.499443 | orchestrator | ok: [testbed-manager]
2026-03-01 00:29:28.499450 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:29:28.499457 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.499464 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.499471 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:29:28.499478 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:29:28.499499 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.499507 | orchestrator |
2026-03-01 00:29:28.499532 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-01 00:29:28.499538 | orchestrator |
2026-03-01 00:29:28.499545 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-01 00:29:28.499551 | orchestrator | Sunday 01 March 2026 00:28:49 +0000 (0:00:01.101) 0:00:02.368 **********
2026-03-01 00:29:28.499558 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.499564 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.499570 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.499576 | orchestrator |
2026-03-01 00:29:28.499582 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-01 00:29:28.499589 | orchestrator | Sunday 01 March 2026 00:28:49 +0000 (0:00:00.081) 0:00:02.450 **********
2026-03-01 00:29:28.499594 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.499600 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.499607 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.499614 | orchestrator |
2026-03-01 00:29:28.499621 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-01 00:29:28.499628 | orchestrator | Sunday 01 March 2026 00:28:49 +0000 (0:00:00.180) 0:00:02.630 **********
2026-03-01 00:29:28.499635 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.499642 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.499649 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.499656 | orchestrator |
2026-03-01 00:29:28.499662 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-01 00:29:28.499669 | orchestrator | Sunday 01 March 2026 00:28:49 +0000 (0:00:00.201) 0:00:02.832 **********
2026-03-01 00:29:28.499677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 00:29:28.499686 | orchestrator |
2026-03-01 00:29:28.499693 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-01 00:29:28.499700 | orchestrator | Sunday 01 March 2026 00:28:50 +0000 (0:00:00.135) 0:00:02.968 **********
2026-03-01 00:29:28.499707 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.499720 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.499734 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.499748 | orchestrator |
2026-03-01 00:29:28.499759 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-01 00:29:28.499766 | orchestrator | Sunday 01 March 2026 00:28:50 +0000 (0:00:00.386) 0:00:03.355 **********
2026-03-01 00:29:28.499774 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:29:28.499781 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:29:28.499787 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:29:28.499796 | orchestrator |
2026-03-01 00:29:28.499803 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-01 00:29:28.499810 | orchestrator | Sunday 01 March 2026 00:28:50 +0000 (0:00:00.096) 0:00:03.451 **********
2026-03-01 00:29:28.499817 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.499824 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.499832 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.499839 | orchestrator |
2026-03-01 00:29:28.499847 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-01 00:29:28.499853 | orchestrator | Sunday 01 March 2026 00:28:51 +0000 (0:00:00.951) 0:00:04.403 **********
2026-03-01 00:29:28.499861 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.499869 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.499876 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.499883 | orchestrator |
2026-03-01 00:29:28.499890 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-01 00:29:28.499898 | orchestrator | Sunday 01 March 2026 00:28:52 +0000 (0:00:00.434) 0:00:04.837 **********
2026-03-01 00:29:28.499906 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.499913 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.499920 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.499927 | orchestrator |
2026-03-01 00:29:28.499940 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-01 00:29:28.499948 | orchestrator | Sunday 01 March 2026 00:28:53 +0000 (0:00:01.008) 0:00:05.846 **********
2026-03-01 00:29:28.499956 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.499962 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.499969 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.499977 | orchestrator |
2026-03-01 00:29:28.499984 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-01 00:29:28.499991 | orchestrator | Sunday 01 March 2026 00:29:10 +0000 (0:00:17.239) 0:00:23.086 **********
2026-03-01 00:29:28.499998 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:29:28.500005 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:29:28.500013 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:29:28.500019 | orchestrator |
2026-03-01 00:29:28.500026 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-01 00:29:28.500048 | orchestrator | Sunday 01 March 2026 00:29:10 +0000 (0:00:00.091) 0:00:23.177 **********
2026-03-01 00:29:28.500056 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:29:28.500062 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:29:28.500069 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:29:28.500076 | orchestrator |
2026-03-01 00:29:28.500082 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-01 00:29:28.500089 | orchestrator | Sunday 01 March 2026 00:29:18 +0000 (0:00:08.139) 0:00:31.317 **********
2026-03-01 00:29:28.500095 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.500102 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.500108 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.500115 | orchestrator |
2026-03-01 00:29:28.500121 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-01 00:29:28.500127 | orchestrator | Sunday 01 March 2026 00:29:18 +0000 (0:00:00.456) 0:00:31.773 **********
2026-03-01 00:29:28.500134 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-01 00:29:28.500142 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-01 00:29:28.500148 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-01 00:29:28.500155 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-01 00:29:28.500161 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-01 00:29:28.500168 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-01 00:29:28.500175 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-01 00:29:28.500182 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-01 00:29:28.500188 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-01 00:29:28.500195 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-01 00:29:28.500201 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-01 00:29:28.500208 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-01 00:29:28.500214 | orchestrator |
2026-03-01 00:29:28.500220 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-01 00:29:28.500226 | orchestrator | Sunday 01 March 2026 00:29:22 +0000 (0:00:03.714) 0:00:35.487 **********
2026-03-01 00:29:28.500233 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.500240 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.500246 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.500253 | orchestrator |
2026-03-01 00:29:28.500259 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-01 00:29:28.500266 | orchestrator |
2026-03-01 00:29:28.500273 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-01 00:29:28.500280 | orchestrator | Sunday 01 March 2026 00:29:24 +0000 (0:00:01.519) 0:00:37.007 **********
2026-03-01 00:29:28.500286 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:29:28.500316 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:29:28.500324 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:29:28.500331 | orchestrator | ok: [testbed-manager]
2026-03-01 00:29:28.500338 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:28.500383 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:28.500391 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:28.500399 | orchestrator |
2026-03-01 00:29:28.500406 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:29:28.500414 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:29:28.500422 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:29:28.500432 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:29:28.500439 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:29:28.500447 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:29:28.500454 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:29:28.500461 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:29:28.500469 | orchestrator |
2026-03-01 00:29:28.500476 | orchestrator |
2026-03-01 00:29:28.500483 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:29:28.500490 | orchestrator | Sunday 01 March 2026 00:29:28 +0000 (0:00:04.307) 0:00:41.314 **********
2026-03-01 00:29:28.500496 | orchestrator | ===============================================================================
2026-03-01 00:29:28.500502 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.24s
2026-03-01 00:29:28.500509 | orchestrator | Install required packages (Debian) -------------------------------------- 8.14s
2026-03-01 00:29:28.500515 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.31s
2026-03-01 00:29:28.500521 | orchestrator | Copy fact files --------------------------------------------------------- 3.71s
2026-03-01 00:29:28.500528 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.52s
2026-03-01 00:29:28.500535 | orchestrator | Create custom facts directory ------------------------------------------- 1.20s
2026-03-01 00:29:28.500549 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s
2026-03-01 00:29:28.689642 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s
2026-03-01 00:29:28.689729 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.95s
2026-03-01 00:29:28.689740 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-03-01 00:29:28.689747 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s
2026-03-01 00:29:28.689753 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.39s
2026-03-01 00:29:28.689759 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-03-01 00:29:28.689763 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-03-01 00:29:28.689776 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-03-01 00:29:28.689782 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-03-01 00:29:28.689786 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-01 00:29:28.689804 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-03-01 00:29:28.971421 | orchestrator | + osism apply bootstrap
2026-03-01 00:29:41.001730 | orchestrator | 2026-03-01 00:29:40 | INFO  | Prepare task for execution of bootstrap.
2026-03-01 00:29:41.075578 | orchestrator | 2026-03-01 00:29:41 | INFO  | Task 767d364e-96fc-4518-a16a-e958746fda66 (bootstrap) was prepared for execution.
2026-03-01 00:29:41.075677 | orchestrator | 2026-03-01 00:29:41 | INFO  | It takes a moment until task 767d364e-96fc-4518-a16a-e958746fda66 (bootstrap) has been started and output is visible here.
2026-03-01 00:29:57.229158 | orchestrator |
2026-03-01 00:29:57.229336 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-01 00:29:57.229352 | orchestrator |
2026-03-01 00:29:57.229362 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-01 00:29:57.229372 | orchestrator | Sunday 01 March 2026 00:29:45 +0000 (0:00:00.142) 0:00:00.142 **********
2026-03-01 00:29:57.229381 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:57.229391 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:57.229400 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:57.229409 | orchestrator | ok: [testbed-manager]
2026-03-01 00:29:57.229418 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:29:57.229427 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:29:57.229436 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:29:57.229445 | orchestrator |
2026-03-01 00:29:57.229454 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-01 00:29:57.229463 | orchestrator |
2026-03-01 00:29:57.229472 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-01 00:29:57.229481 | orchestrator | Sunday 01 March 2026 00:29:45 +0000 (0:00:00.233) 0:00:00.375 **********
2026-03-01 00:29:57.229490 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:29:57.229499 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:29:57.229508 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:29:57.229517 | orchestrator | ok: [testbed-manager]
2026-03-01 00:29:57.229526 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:29:57.229534 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:29:57.229543 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:29:57.229552 | orchestrator |
2026-03-01 00:29:57.229561 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-01 00:29:57.229569 | orchestrator |
2026-03-01 00:29:57.229578 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-01 00:29:57.229587 | orchestrator | Sunday 01 March 2026 00:29:49 +0000 (0:00:03.595) 0:00:03.971 **********
2026-03-01 00:29:57.229598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 00:29:57.229607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 00:29:57.229616 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-01 00:29:57.229624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 00:29:57.229633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-01 00:29:57.229642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-01 00:29:57.229650 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-01 00:29:57.229659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-01 00:29:57.229668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-01 00:29:57.229677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-01 00:29:57.229685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-01 00:29:57.229696 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-01 00:29:57.229706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-01 00:29:57.229717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-01 00:29:57.229727 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-01 00:29:57.229737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-01 00:29:57.229771 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:29:57.229782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-01 00:29:57.229792 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-01 00:29:57.229802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-01 00:29:57.229812 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-01 00:29:57.229822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-01 00:29:57.229832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-01 00:29:57.229842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-01 00:29:57.229852 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:29:57.229864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-01 00:29:57.229879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-01 00:29:57.229893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-01 00:29:57.229914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-01 00:29:57.229932 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:29:57.229946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-01 00:29:57.229961 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-01 00:29:57.229976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-01 00:29:57.229991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-01 00:29:57.230005 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-01 00:29:57.230081 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-01 00:29:57.230094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-01 00:29:57.230104 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-01 00:29:57.230116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-01 00:29:57.230126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-01 00:29:57.230169 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-01 00:29:57.230187 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-01 00:29:57.230197 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:29:57.230206 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-01 00:29:57.230215 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-01 00:29:57.230224 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-01 00:29:57.230253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-01 00:29:57.230268 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:29:57.230353 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-01 00:29:57.230371 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-01 00:29:57.230384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-01 00:29:57.230398 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:29:57.230413 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-01 00:29:57.230427 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-01 00:29:57.230442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-01 00:29:57.230455 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:29:57.230468 | orchestrator |
2026-03-01 00:29:57.230482 | orchestrator |
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-01 00:29:57.230496 | orchestrator | 2026-03-01 00:29:57.230510 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-01 00:29:57.230526 | orchestrator | Sunday 01 March 2026 00:29:49 +0000 (0:00:00.473) 0:00:04.445 ********** 2026-03-01 00:29:57.230540 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:29:57.230555 | orchestrator | ok: [testbed-manager] 2026-03-01 00:29:57.230588 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:29:57.230602 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:29:57.230618 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:29:57.230631 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:29:57.230646 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:29:57.230661 | orchestrator | 2026-03-01 00:29:57.230676 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-01 00:29:57.230691 | orchestrator | Sunday 01 March 2026 00:29:50 +0000 (0:00:01.194) 0:00:05.639 ********** 2026-03-01 00:29:57.230706 | orchestrator | ok: [testbed-manager] 2026-03-01 00:29:57.230720 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:29:57.230736 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:29:57.230745 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:29:57.230755 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:29:57.230770 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:29:57.230785 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:29:57.230799 | orchestrator | 2026-03-01 00:29:57.230813 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-01 00:29:57.230827 | orchestrator | Sunday 01 March 2026 00:29:52 +0000 (0:00:01.347) 0:00:06.986 ********** 2026-03-01 00:29:57.230844 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:29:57.230860 | orchestrator | 2026-03-01 00:29:57.230876 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-01 00:29:57.230900 | orchestrator | Sunday 01 March 2026 00:29:52 +0000 (0:00:00.276) 0:00:07.263 ********** 2026-03-01 00:29:57.230916 | orchestrator | changed: [testbed-manager] 2026-03-01 00:29:57.230930 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:29:57.230946 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:29:57.230961 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:29:57.230975 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:29:57.230990 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:29:57.231003 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:29:57.231016 | orchestrator | 2026-03-01 00:29:57.231031 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-01 00:29:57.231046 | orchestrator | Sunday 01 March 2026 00:29:54 +0000 (0:00:02.117) 0:00:09.380 ********** 2026-03-01 00:29:57.231061 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:29:57.231076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:29:57.231087 | orchestrator | 2026-03-01 00:29:57.231096 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-01 00:29:57.231105 | orchestrator | Sunday 01 March 2026 00:29:54 +0000 (0:00:00.293) 0:00:09.674 ********** 2026-03-01 00:29:57.231114 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:29:57.231122 | 
orchestrator | changed: [testbed-node-1] 2026-03-01 00:29:57.231131 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:29:57.231140 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:29:57.231148 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:29:57.231174 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:29:57.231184 | orchestrator | 2026-03-01 00:29:57.231192 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-01 00:29:57.231201 | orchestrator | Sunday 01 March 2026 00:29:56 +0000 (0:00:01.164) 0:00:10.838 ********** 2026-03-01 00:29:57.231210 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:29:57.231219 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:29:57.231227 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:29:57.231236 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:29:57.231245 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:29:57.231253 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:29:57.231274 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:29:57.231282 | orchestrator | 2026-03-01 00:29:57.231321 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-01 00:29:57.231340 | orchestrator | Sunday 01 March 2026 00:29:56 +0000 (0:00:00.631) 0:00:11.470 ********** 2026-03-01 00:29:57.231349 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:29:57.231358 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:29:57.231367 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:29:57.231375 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:29:57.231384 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:29:57.231393 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:29:57.231402 | orchestrator | ok: [testbed-manager] 2026-03-01 00:29:57.231411 | orchestrator | 2026-03-01 00:29:57.231420 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-01 00:29:57.231430 | orchestrator | Sunday 01 March 2026 00:29:57 +0000 (0:00:00.464) 0:00:11.934 ********** 2026-03-01 00:29:57.231439 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:29:57.231448 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:29:57.231470 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:30:09.524485 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:09.524589 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:30:09.524600 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:30:09.524608 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:30:09.524615 | orchestrator | 2026-03-01 00:30:09.524625 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-01 00:30:09.524635 | orchestrator | Sunday 01 March 2026 00:29:57 +0000 (0:00:00.199) 0:00:12.134 ********** 2026-03-01 00:30:09.524646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:09.524669 | orchestrator | 2026-03-01 00:30:09.524678 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-01 00:30:09.524687 | orchestrator | Sunday 01 March 2026 00:29:57 +0000 (0:00:00.268) 0:00:12.402 ********** 2026-03-01 00:30:09.524696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:09.524704 | orchestrator | 2026-03-01 00:30:09.524712 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-01 
00:30:09.524720 | orchestrator | Sunday 01 March 2026 00:29:57 +0000 (0:00:00.371) 0:00:12.775 ********** 2026-03-01 00:30:09.524727 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.524736 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.524744 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.524752 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.524760 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.524767 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.524775 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.524792 | orchestrator | 2026-03-01 00:30:09.524797 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-01 00:30:09.524803 | orchestrator | Sunday 01 March 2026 00:29:59 +0000 (0:00:01.301) 0:00:14.076 ********** 2026-03-01 00:30:09.524808 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:30:09.524814 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:30:09.524819 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:30:09.524823 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:09.524828 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:30:09.524833 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:30:09.524838 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:30:09.524843 | orchestrator | 2026-03-01 00:30:09.524848 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-01 00:30:09.524874 | orchestrator | Sunday 01 March 2026 00:29:59 +0000 (0:00:00.189) 0:00:14.265 ********** 2026-03-01 00:30:09.524879 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.524884 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.524889 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.524894 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.524899 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.524903 | orchestrator 
| ok: [testbed-node-0] 2026-03-01 00:30:09.524908 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.524913 | orchestrator | 2026-03-01 00:30:09.524918 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-01 00:30:09.524923 | orchestrator | Sunday 01 March 2026 00:29:59 +0000 (0:00:00.540) 0:00:14.806 ********** 2026-03-01 00:30:09.524927 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:30:09.524932 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:30:09.524937 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:30:09.524942 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:09.524946 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:30:09.524951 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:30:09.524956 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:30:09.524961 | orchestrator | 2026-03-01 00:30:09.524966 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-01 00:30:09.524971 | orchestrator | Sunday 01 March 2026 00:30:00 +0000 (0:00:00.220) 0:00:15.026 ********** 2026-03-01 00:30:09.524976 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:09.524981 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:09.524986 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.524991 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:09.524995 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:09.525000 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:09.525005 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:09.525010 | orchestrator | 2026-03-01 00:30:09.525014 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-01 00:30:09.525019 | orchestrator | Sunday 01 March 2026 00:30:00 +0000 (0:00:00.569) 0:00:15.596 ********** 2026-03-01 00:30:09.525024 | orchestrator | ok: 
[testbed-manager] 2026-03-01 00:30:09.525029 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:09.525034 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:09.525038 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:09.525043 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:09.525048 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:09.525053 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:09.525057 | orchestrator | 2026-03-01 00:30:09.525069 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-01 00:30:09.525074 | orchestrator | Sunday 01 March 2026 00:30:01 +0000 (0:00:01.172) 0:00:16.769 ********** 2026-03-01 00:30:09.525079 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525084 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525089 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525093 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525098 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525103 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525108 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525112 | orchestrator | 2026-03-01 00:30:09.525117 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-01 00:30:09.525122 | orchestrator | Sunday 01 March 2026 00:30:03 +0000 (0:00:01.286) 0:00:18.055 ********** 2026-03-01 00:30:09.525142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:09.525147 | orchestrator | 2026-03-01 00:30:09.525152 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-01 00:30:09.525157 | orchestrator | Sunday 01 March 2026 
00:30:03 +0000 (0:00:00.341) 0:00:18.396 ********** 2026-03-01 00:30:09.525166 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:09.525171 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:09.525176 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:09.525181 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:09.525186 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:09.525190 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:09.525195 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:09.525200 | orchestrator | 2026-03-01 00:30:09.525204 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-01 00:30:09.525209 | orchestrator | Sunday 01 March 2026 00:30:05 +0000 (0:00:01.491) 0:00:19.888 ********** 2026-03-01 00:30:09.525214 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525219 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525223 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525228 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525233 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525237 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525242 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525247 | orchestrator | 2026-03-01 00:30:09.525252 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-01 00:30:09.525257 | orchestrator | Sunday 01 March 2026 00:30:05 +0000 (0:00:00.222) 0:00:20.111 ********** 2026-03-01 00:30:09.525261 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525266 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525271 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525276 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525280 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525321 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525327 | 
orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525332 | orchestrator | 2026-03-01 00:30:09.525337 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-01 00:30:09.525342 | orchestrator | Sunday 01 March 2026 00:30:05 +0000 (0:00:00.245) 0:00:20.356 ********** 2026-03-01 00:30:09.525347 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525351 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525356 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525361 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525365 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525370 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525375 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525380 | orchestrator | 2026-03-01 00:30:09.525385 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-01 00:30:09.525389 | orchestrator | Sunday 01 March 2026 00:30:05 +0000 (0:00:00.229) 0:00:20.586 ********** 2026-03-01 00:30:09.525395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:09.525402 | orchestrator | 2026-03-01 00:30:09.525406 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-01 00:30:09.525411 | orchestrator | Sunday 01 March 2026 00:30:06 +0000 (0:00:00.250) 0:00:20.836 ********** 2026-03-01 00:30:09.525416 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525421 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525425 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525434 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525442 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525453 | orchestrator | ok: 
[testbed-node-3] 2026-03-01 00:30:09.525462 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525470 | orchestrator | 2026-03-01 00:30:09.525478 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-01 00:30:09.525485 | orchestrator | Sunday 01 March 2026 00:30:06 +0000 (0:00:00.541) 0:00:21.378 ********** 2026-03-01 00:30:09.525494 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:30:09.525502 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:30:09.525516 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:30:09.525523 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:09.525530 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:30:09.525538 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:30:09.525545 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:30:09.525553 | orchestrator | 2026-03-01 00:30:09.525560 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-01 00:30:09.525568 | orchestrator | Sunday 01 March 2026 00:30:06 +0000 (0:00:00.231) 0:00:21.610 ********** 2026-03-01 00:30:09.525575 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525582 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525590 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525597 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525605 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:09.525612 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:09.525619 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:09.525628 | orchestrator | 2026-03-01 00:30:09.525636 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-01 00:30:09.525644 | orchestrator | Sunday 01 March 2026 00:30:07 +0000 (0:00:01.086) 0:00:22.696 ********** 2026-03-01 00:30:09.525651 | orchestrator | ok: [testbed-node-3] 2026-03-01 
00:30:09.525659 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525667 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525675 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525683 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:09.525691 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:09.525699 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:09.525707 | orchestrator | 2026-03-01 00:30:09.525715 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-01 00:30:09.525724 | orchestrator | Sunday 01 March 2026 00:30:08 +0000 (0:00:00.599) 0:00:23.296 ********** 2026-03-01 00:30:09.525730 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:09.525734 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:09.525739 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:09.525744 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:09.525756 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.267262 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.267460 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.267479 | orchestrator | 2026-03-01 00:30:50.267493 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-01 00:30:50.267507 | orchestrator | Sunday 01 March 2026 00:30:09 +0000 (0:00:01.231) 0:00:24.527 ********** 2026-03-01 00:30:50.267518 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.267530 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.267541 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.267552 | orchestrator | changed: [testbed-manager] 2026-03-01 00:30:50.267563 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.267573 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.267584 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.267595 | orchestrator | 2026-03-01 00:30:50.267607 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-01 00:30:50.267639 | orchestrator | Sunday 01 March 2026 00:30:27 +0000 (0:00:17.682) 0:00:42.210 ********** 2026-03-01 00:30:50.267652 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.267663 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.267674 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.267685 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.267697 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.267707 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.267735 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.267756 | orchestrator | 2026-03-01 00:30:50.267768 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-01 00:30:50.267779 | orchestrator | Sunday 01 March 2026 00:30:27 +0000 (0:00:00.161) 0:00:42.371 ********** 2026-03-01 00:30:50.267790 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.267828 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.267841 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.267855 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.267868 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.267880 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.267893 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.267906 | orchestrator | 2026-03-01 00:30:50.267919 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-01 00:30:50.267932 | orchestrator | Sunday 01 March 2026 00:30:27 +0000 (0:00:00.181) 0:00:42.553 ********** 2026-03-01 00:30:50.267945 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.267957 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.267970 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.267983 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.267995 | orchestrator | ok: 
[testbed-node-0] 2026-03-01 00:30:50.268008 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.268021 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.268033 | orchestrator | 2026-03-01 00:30:50.268047 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-01 00:30:50.268059 | orchestrator | Sunday 01 March 2026 00:30:27 +0000 (0:00:00.171) 0:00:42.725 ********** 2026-03-01 00:30:50.268074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:50.268089 | orchestrator | 2026-03-01 00:30:50.268102 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-01 00:30:50.268116 | orchestrator | Sunday 01 March 2026 00:30:28 +0000 (0:00:00.261) 0:00:42.987 ********** 2026-03-01 00:30:50.268128 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.268141 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.268153 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.268163 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.268192 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.268203 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.268214 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.268225 | orchestrator | 2026-03-01 00:30:50.268236 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-01 00:30:50.268247 | orchestrator | Sunday 01 March 2026 00:30:29 +0000 (0:00:01.825) 0:00:44.812 ********** 2026-03-01 00:30:50.268257 | orchestrator | changed: [testbed-manager] 2026-03-01 00:30:50.268269 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.268332 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:50.268343 | orchestrator | 
changed: [testbed-node-4] 2026-03-01 00:30:50.268355 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:50.268374 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.268392 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.268410 | orchestrator | 2026-03-01 00:30:50.268431 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-01 00:30:50.268451 | orchestrator | Sunday 01 March 2026 00:30:31 +0000 (0:00:01.127) 0:00:45.940 ********** 2026-03-01 00:30:50.268469 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.268480 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.268491 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.268502 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.268513 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.268523 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.268534 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.268544 | orchestrator | 2026-03-01 00:30:50.268555 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-01 00:30:50.268566 | orchestrator | Sunday 01 March 2026 00:30:31 +0000 (0:00:00.837) 0:00:46.777 ********** 2026-03-01 00:30:50.268583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:50.268605 | orchestrator | 2026-03-01 00:30:50.268616 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-01 00:30:50.268628 | orchestrator | Sunday 01 March 2026 00:30:32 +0000 (0:00:00.268) 0:00:47.046 ********** 2026-03-01 00:30:50.268639 | orchestrator | changed: [testbed-manager] 2026-03-01 00:30:50.268650 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:50.268660 | 
orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:50.268671 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:50.268682 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.268693 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.268703 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.268714 | orchestrator | 2026-03-01 00:30:50.268745 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-01 00:30:50.268757 | orchestrator | Sunday 01 March 2026 00:30:33 +0000 (0:00:01.115) 0:00:48.161 ********** 2026-03-01 00:30:50.268768 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:30:50.268778 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:30:50.268789 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:30:50.268799 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:30:50.268810 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:30:50.268820 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:30:50.268831 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:30:50.268842 | orchestrator | 2026-03-01 00:30:50.268852 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-01 00:30:50.268863 | orchestrator | Sunday 01 March 2026 00:30:33 +0000 (0:00:00.217) 0:00:48.379 ********** 2026-03-01 00:30:50.268875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:50.268885 | orchestrator | 2026-03-01 00:30:50.268896 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-01 00:30:50.268907 | orchestrator | Sunday 01 March 2026 00:30:33 +0000 (0:00:00.314) 0:00:48.694 ********** 2026-03-01 00:30:50.268918 | orchestrator | ok: 
[testbed-manager] 2026-03-01 00:30:50.268928 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.268939 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.268950 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.268960 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.268970 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.268981 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.268991 | orchestrator | 2026-03-01 00:30:50.269002 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-01 00:30:50.269013 | orchestrator | Sunday 01 March 2026 00:30:35 +0000 (0:00:01.750) 0:00:50.444 ********** 2026-03-01 00:30:50.269024 | orchestrator | changed: [testbed-manager] 2026-03-01 00:30:50.269035 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:50.269045 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:50.269056 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:50.269067 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.269077 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.269088 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.269098 | orchestrator | 2026-03-01 00:30:50.269109 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-01 00:30:50.269120 | orchestrator | Sunday 01 March 2026 00:30:36 +0000 (0:00:01.149) 0:00:51.593 ********** 2026-03-01 00:30:50.269131 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:30:50.269141 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:30:50.269152 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:30:50.269162 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:30:50.269173 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:30:50.269183 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:30:50.269201 | orchestrator | changed: [testbed-manager] 2026-03-01 00:30:50.269212 | 
orchestrator | 2026-03-01 00:30:50.269223 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-01 00:30:50.269233 | orchestrator | Sunday 01 March 2026 00:30:47 +0000 (0:00:10.688) 0:01:02.281 ********** 2026-03-01 00:30:50.269244 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.269255 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.269266 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.269301 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.269313 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.269324 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.269335 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.269345 | orchestrator | 2026-03-01 00:30:50.269356 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-01 00:30:50.269367 | orchestrator | Sunday 01 March 2026 00:30:48 +0000 (0:00:01.188) 0:01:03.470 ********** 2026-03-01 00:30:50.269378 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.269389 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.269406 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.269425 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.269442 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.269461 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.269481 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.269500 | orchestrator | 2026-03-01 00:30:50.269519 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-01 00:30:50.269539 | orchestrator | Sunday 01 March 2026 00:30:49 +0000 (0:00:00.900) 0:01:04.370 ********** 2026-03-01 00:30:50.269556 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.269576 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.269592 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.269608 | orchestrator | ok: 
[testbed-manager] 2026-03-01 00:30:50.269625 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.269643 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.269659 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.269676 | orchestrator | 2026-03-01 00:30:50.269694 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-01 00:30:50.269711 | orchestrator | Sunday 01 March 2026 00:30:49 +0000 (0:00:00.219) 0:01:04.589 ********** 2026-03-01 00:30:50.269729 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:30:50.269746 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:30:50.269763 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:30:50.269780 | orchestrator | ok: [testbed-manager] 2026-03-01 00:30:50.269808 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:30:50.269826 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:30:50.269843 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:30:50.269855 | orchestrator | 2026-03-01 00:30:50.269865 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-01 00:30:50.269876 | orchestrator | Sunday 01 March 2026 00:30:49 +0000 (0:00:00.213) 0:01:04.803 ********** 2026-03-01 00:30:50.269888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:30:50.269899 | orchestrator | 2026-03-01 00:30:50.269920 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-01 00:33:21.105450 | orchestrator | Sunday 01 March 2026 00:30:50 +0000 (0:00:00.278) 0:01:05.082 ********** 2026-03-01 00:33:21.105529 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105536 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105541 | orchestrator | 
ok: [testbed-node-4] 2026-03-01 00:33:21.105545 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105549 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105563 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105567 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105571 | orchestrator | 2026-03-01 00:33:21.105576 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-01 00:33:21.105596 | orchestrator | Sunday 01 March 2026 00:30:52 +0000 (0:00:01.924) 0:01:07.006 ********** 2026-03-01 00:33:21.105601 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:33:21.105606 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:33:21.105609 | orchestrator | changed: [testbed-manager] 2026-03-01 00:33:21.105613 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:33:21.105617 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:33:21.105621 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:33:21.105625 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:33:21.105629 | orchestrator | 2026-03-01 00:33:21.105633 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-01 00:33:21.105638 | orchestrator | Sunday 01 March 2026 00:30:52 +0000 (0:00:00.609) 0:01:07.615 ********** 2026-03-01 00:33:21.105642 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105646 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105649 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105653 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105657 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105661 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105665 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105668 | orchestrator | 2026-03-01 00:33:21.105672 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-01 
00:33:21.105676 | orchestrator | Sunday 01 March 2026 00:30:53 +0000 (0:00:00.260) 0:01:07.875 ********** 2026-03-01 00:33:21.105680 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105684 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105687 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105691 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105695 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105698 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105702 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105706 | orchestrator | 2026-03-01 00:33:21.105710 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-01 00:33:21.105713 | orchestrator | Sunday 01 March 2026 00:30:54 +0000 (0:00:01.216) 0:01:09.092 ********** 2026-03-01 00:33:21.105717 | orchestrator | changed: [testbed-manager] 2026-03-01 00:33:21.105721 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:33:21.105725 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:33:21.105728 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:33:21.105732 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:33:21.105736 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:33:21.105740 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:33:21.105743 | orchestrator | 2026-03-01 00:33:21.105747 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-01 00:33:21.105751 | orchestrator | Sunday 01 March 2026 00:30:56 +0000 (0:00:02.255) 0:01:11.348 ********** 2026-03-01 00:33:21.105755 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105759 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105762 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105766 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105770 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105774 | orchestrator | ok: 
[testbed-node-5] 2026-03-01 00:33:21.105778 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105781 | orchestrator | 2026-03-01 00:33:21.105785 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-01 00:33:21.105789 | orchestrator | Sunday 01 March 2026 00:30:58 +0000 (0:00:02.447) 0:01:13.796 ********** 2026-03-01 00:33:21.105793 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105796 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105800 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105804 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105808 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105811 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105815 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105819 | orchestrator | 2026-03-01 00:33:21.105822 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-01 00:33:21.105830 | orchestrator | Sunday 01 March 2026 00:31:39 +0000 (0:00:40.714) 0:01:54.510 ********** 2026-03-01 00:33:21.105834 | orchestrator | changed: [testbed-manager] 2026-03-01 00:33:21.105838 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:33:21.105842 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:33:21.105845 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:33:21.105849 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:33:21.105853 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:33:21.105857 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:33:21.105860 | orchestrator | 2026-03-01 00:33:21.105864 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-01 00:33:21.105868 | orchestrator | Sunday 01 March 2026 00:33:06 +0000 (0:01:26.722) 0:03:21.233 ********** 2026-03-01 00:33:21.105872 | orchestrator | ok: [testbed-manager] 2026-03-01 00:33:21.105876 | orchestrator | 
ok: [testbed-node-3] 2026-03-01 00:33:21.105879 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105883 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105887 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105891 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105895 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105898 | orchestrator | 2026-03-01 00:33:21.105902 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-01 00:33:21.105906 | orchestrator | Sunday 01 March 2026 00:33:08 +0000 (0:00:02.067) 0:03:23.300 ********** 2026-03-01 00:33:21.105910 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:33:21.105914 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:33:21.105918 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:33:21.105922 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:33:21.105926 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:33:21.105929 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:33:21.105933 | orchestrator | changed: [testbed-manager] 2026-03-01 00:33:21.105937 | orchestrator | 2026-03-01 00:33:21.105941 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-01 00:33:21.105945 | orchestrator | Sunday 01 March 2026 00:33:19 +0000 (0:00:11.400) 0:03:34.701 ********** 2026-03-01 00:33:21.105970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-01 00:33:21.105981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-01 00:33:21.105987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-01 00:33:21.105994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-01 00:33:21.106006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-01 00:33:21.106052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-01 00:33:21.106063 | orchestrator | 2026-03-01 00:33:21.106070 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-01 00:33:21.106078 | orchestrator | Sunday 01 March 2026 00:33:20 +0000 (0:00:00.394) 0:03:35.095 ********** 2026-03-01 00:33:21.106083 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-01 00:33:21.106087 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:33:21.106092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-01 00:33:21.106096 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-01 00:33:21.106100 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:33:21.106105 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:33:21.106109 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-01 00:33:21.106114 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:33:21.106118 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 00:33:21.106130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 00:33:21.106134 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 00:33:21.106159 | orchestrator | 2026-03-01 00:33:21.106164 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-01 00:33:21.106171 | orchestrator | Sunday 01 March 2026 00:33:21 +0000 (0:00:00.765) 0:03:35.861 ********** 2026-03-01 00:33:21.106178 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-01 00:33:21.106186 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-01 00:33:21.106192 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-01 00:33:21.106198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-01 00:33:21.106204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-01 00:33:21.106215 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-01 00:33:27.225075 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-01 00:33:27.225213 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-01 00:33:27.225231 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-01 00:33:27.225243 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-01 00:33:27.225255 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-01 00:33:27.225266 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-01 00:33:27.225277 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-01 00:33:27.225288 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-01 00:33:27.225320 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-01 00:33:27.225333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-01 00:33:27.225344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-01 00:33:27.225356 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-01 00:33:27.225366 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-01 00:33:27.225377 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-01 00:33:27.225388 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-01 00:33:27.225399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-01 00:33:27.225410 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-01 00:33:27.225422 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:33:27.225434 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-01 00:33:27.225445 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-01 00:33:27.225456 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-01 00:33:27.225466 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-01 00:33:27.225477 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-01 00:33:27.225488 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-01 00:33:27.225499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-01 00:33:27.225510 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-01 00:33:27.225521 | 
orchestrator | skipping: [testbed-node-4] 2026-03-01 00:33:27.225532 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-01 00:33:27.225543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-01 00:33:27.225554 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-01 00:33:27.225565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-01 00:33:27.225575 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-01 00:33:27.225586 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:33:27.225597 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-01 00:33:27.225608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-01 00:33:27.225619 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-01 00:33:27.225644 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-01 00:33:27.225656 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:33:27.225667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-01 00:33:27.225678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-01 00:33:27.225689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-01 00:33:27.225708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-01 00:33:27.225719 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-01 00:33:27.225748 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-01 00:33:27.225760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-01 00:33:27.225771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-01 00:33:27.225782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-01 00:33:27.225793 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225815 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225826 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225847 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-01 00:33:27.225858 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-01 00:33:27.225869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-01 00:33:27.225880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-01 00:33:27.225891 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-01 00:33:27.225902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2026-03-01 00:33:27.225912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-01 00:33:27.225923 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-01 00:33:27.225934 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-01 00:33:27.225945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-01 00:33:27.225956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-01 00:33:27.225967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-01 00:33:27.225978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-01 00:33:27.225989 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-01 00:33:27.226000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-01 00:33:27.226011 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-01 00:33:27.226080 | orchestrator | 2026-03-01 00:33:27.226093 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-01 00:33:27.226104 | orchestrator | Sunday 01 March 2026 00:33:26 +0000 (0:00:05.132) 0:03:40.993 ********** 2026-03-01 00:33:27.226115 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226126 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226157 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226168 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-01 00:33:27.226220 | orchestrator | 2026-03-01 00:33:27.226231 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-01 00:33:27.226241 | orchestrator | Sunday 01 March 2026 00:33:26 +0000 (0:00:00.615) 0:03:41.609 ********** 2026-03-01 00:33:27.226252 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-01 00:33:27.226263 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:33:27.226280 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-01 00:33:27.226291 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-01 00:33:27.226302 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:33:27.226313 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:33:27.226324 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-01 00:33:27.226335 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:33:27.226346 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-01 00:33:27.226357 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-01 00:33:27.226384 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-01 
2026-03-01 00:33:39.765976 | orchestrator |
2026-03-01 00:33:39.766175 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-01 00:33:39.766189 | orchestrator | Sunday 01 March 2026 00:33:27 +0000 (0:00:00.458) 0:03:42.067 **********
2026-03-01 00:33:39.766194 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766199 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:33:39.766205 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766209 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766213 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:33:39.766218 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:33:39.766222 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766226 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:33:39.766229 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-01 00:33:39.766241 | orchestrator |
2026-03-01 00:33:39.766245 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-01 00:33:39.766249 | orchestrator | Sunday 01 March 2026 00:33:27 +0000 (0:00:00.606) 0:03:42.674 **********
2026-03-01 00:33:39.766253 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766256 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:33:39.766260 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766268 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:33:39.766289 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:33:39.766293 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766297 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:33:39.766301 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766305 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766308 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-01 00:33:39.766312 | orchestrator |
2026-03-01 00:33:39.766316 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-01 00:33:39.766320 | orchestrator | Sunday 01 March 2026 00:33:28 +0000 (0:00:00.534) 0:03:43.208 **********
2026-03-01 00:33:39.766324 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:33:39.766327 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:33:39.766332 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:33:39.766336 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:33:39.766340 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:33:39.766343 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:33:39.766347 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:33:39.766351 | orchestrator |
2026-03-01 00:33:39.766355 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-01 00:33:39.766359 | orchestrator | Sunday 01 March 2026 00:33:28 +0000 (0:00:00.241) 0:03:43.449 **********
2026-03-01 00:33:39.766363 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:33:39.766367 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:33:39.766371 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:33:39.766374 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:33:39.766378 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:33:39.766382 | orchestrator | ok: [testbed-manager]
2026-03-01 00:33:39.766385 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:33:39.766389 | orchestrator |
2026-03-01 00:33:39.766393 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-01 00:33:39.766397 | orchestrator | Sunday 01 March 2026 00:33:33 +0000 (0:00:05.228) 0:03:48.678 **********
2026-03-01 00:33:39.766401 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-01 00:33:39.766405 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-01 00:33:39.766409 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:33:39.766412 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-01 00:33:39.766416 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:33:39.766420 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-01 00:33:39.766424 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:33:39.766428 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:33:39.766431 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-01 00:33:39.766435 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-01 00:33:39.766439 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:33:39.766443 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:33:39.766446 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-01 00:33:39.766450 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:33:39.766454 | orchestrator |
2026-03-01 00:33:39.766458 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-01 00:33:39.766462 | orchestrator | Sunday 01 March 2026 00:33:34 +0000 (0:00:00.247) 0:03:48.925 **********
2026-03-01 00:33:39.766465 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-01 00:33:39.766470 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-01 00:33:39.766473 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-01 00:33:39.766488 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-01 00:33:39.766492 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-01 00:33:39.766496 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-01 00:33:39.766505 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-01 00:33:39.766509 | orchestrator |
2026-03-01 00:33:39.766512 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-01 00:33:39.766516 | orchestrator | Sunday 01 March 2026 00:33:35 +0000 (0:00:01.141) 0:03:50.066 **********
2026-03-01 00:33:39.766522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:33:39.766527 | orchestrator |
2026-03-01 00:33:39.766531 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-01 00:33:39.766534 | orchestrator | Sunday 01 March 2026 00:33:35 +0000 (0:00:00.317) 0:03:50.384 **********
2026-03-01 00:33:39.766538 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:33:39.766543 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:33:39.766547 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:33:39.766552 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:33:39.766556 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:33:39.766561 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:33:39.766565 | orchestrator | ok: [testbed-manager]
2026-03-01 00:33:39.766569 | orchestrator |
2026-03-01 00:33:39.766574 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-01 00:33:39.766578 | orchestrator | Sunday 01 March 2026 00:33:37 +0000 (0:00:01.801) 0:03:52.185 **********
2026-03-01 00:33:39.766583 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:33:39.766587 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:33:39.766592 | orchestrator | ok: [testbed-manager]
2026-03-01 00:33:39.766596 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:33:39.766600 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:33:39.766605 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:33:39.766609 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:33:39.766613 | orchestrator |
2026-03-01 00:33:39.766618 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-01 00:33:39.766622 | orchestrator | Sunday 01 March 2026 00:33:37 +0000 (0:00:00.600) 0:03:52.786 **********
2026-03-01 00:33:39.766627 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:33:39.766643 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:33:39.766648 | orchestrator | changed: [testbed-manager]
2026-03-01 00:33:39.766652 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:33:39.766657 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:33:39.766661 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:33:39.766666 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:33:39.766670 | orchestrator |
2026-03-01 00:33:39.766675 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-01 00:33:39.766679 | orchestrator | Sunday 01 March 2026 00:33:38 +0000 (0:00:00.649) 0:03:53.435 **********
2026-03-01 00:33:39.766684 | orchestrator | ok: [testbed-manager]
2026-03-01 00:33:39.766688 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:33:39.766692 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:33:39.766697 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:33:39.766701 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:33:39.766706 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:33:39.766718 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:33:39.766722 | orchestrator |
2026-03-01 00:33:39.766727 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-01 00:33:39.766737 | orchestrator | Sunday 01 March 2026 00:33:39 +0000 (0:00:00.620) 0:03:54.055 **********
2026-03-01 00:33:39.766744 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323542.7194016, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:39.766766 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323508.896728, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:39.766771 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323540.9069128, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:39.766788 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323563.754288, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104771 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323555.932104, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104889 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323559.5477092, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104907 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772323555.582656, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104920 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104958 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104985 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.104997 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.105029 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.105042 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.105053 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 00:33:45.105065 | orchestrator |
2026-03-01 00:33:45.105079 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-01 00:33:45.105093 | orchestrator | Sunday 01 March 2026 00:33:40 +0000 (0:00:01.037) 0:03:55.093 **********
2026-03-01 00:33:45.105104 | orchestrator | changed: [testbed-manager]
2026-03-01 00:33:45.105154 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:33:45.105166 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:33:45.105186 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:33:45.105197 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:33:45.105208 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:33:45.105219 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:33:45.105230 | orchestrator |
2026-03-01 00:33:45.105242 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-01 00:33:45.105253 | orchestrator | Sunday 01 March 2026 00:33:41 +0000 (0:00:01.139) 0:03:56.233 **********
2026-03-01 00:33:45.105264 | orchestrator | changed: [testbed-manager]
2026-03-01 00:33:45.105278 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:33:45.105290 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:33:45.105304 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:33:45.105316 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:33:45.105329 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:33:45.105341 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:33:45.105354 | orchestrator |
2026-03-01 00:33:45.105367 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-01 00:33:45.105380 | orchestrator | Sunday 01 March 2026 00:33:42 +0000 (0:00:01.273) 0:03:57.506 **********
2026-03-01 00:33:45.105393 | orchestrator | changed: [testbed-manager]
2026-03-01 00:33:45.105406 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:33:45.105419 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:33:45.105433 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:33:45.105446 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:33:45.105458 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:33:45.105471 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:33:45.105484 | orchestrator |
2026-03-01 00:33:45.105497 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-01 00:33:45.105515 | orchestrator | Sunday 01 March 2026 00:33:43 +0000 (0:00:01.101) 0:03:58.607 **********
2026-03-01 00:33:45.105530 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:33:45.105542 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:33:45.105555 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:33:45.105567 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:33:45.105580 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:33:45.105591 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:33:45.105604 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:33:45.105616 | orchestrator |
2026-03-01 00:33:45.105630 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-01 00:33:45.105641 | orchestrator | Sunday 01 March 2026 00:33:43 +0000 (0:00:00.207) 0:03:58.815 **********
2026-03-01 00:33:45.105652 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:33:45.105663 | orchestrator | ok: [testbed-manager]
2026-03-01 00:33:45.105674 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:33:45.105684 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:33:45.105695 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:33:45.105706 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:33:45.105725 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:33:45.105744 | orchestrator |
2026-03-01 00:33:45.105762 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-01 00:33:45.105780 | orchestrator | Sunday 01 March 2026 00:33:44 +0000 (0:00:00.726) 0:03:59.541 **********
2026-03-01 00:33:45.105801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:33:45.105823 | orchestrator |
2026-03-01 00:33:45.105842 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-01 00:33:45.105870 | orchestrator | Sunday 01 March 2026 00:33:45 +0000 (0:00:00.377) 0:03:59.919 **********
2026-03-01 00:35:08.135337 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.135459 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:08.135480 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:08.135495 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:08.135538 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:08.135553 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:08.135567 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:08.135582 | orchestrator |
2026-03-01 00:35:08.135603 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-01 00:35:08.135618 | orchestrator | Sunday 01 March 2026 00:33:54 +0000 (0:00:09.156) 0:04:09.076 **********
2026-03-01 00:35:08.135633 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.135646 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.135659 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.135672 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.135685 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.135698 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.135712 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.135725 | orchestrator |
2026-03-01 00:35:08.135738 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-01 00:35:08.135751 | orchestrator | Sunday 01 March 2026 00:33:55 +0000 (0:00:01.458) 0:04:10.534 **********
2026-03-01 00:35:08.135764 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.135777 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.135790 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.135803 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.135816 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.135830 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.135844 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.135858 | orchestrator |
2026-03-01 00:35:08.135873 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-01 00:35:08.135887 | orchestrator | Sunday 01 March 2026 00:33:56 +0000 (0:00:01.017) 0:04:11.552 **********
2026-03-01 00:35:08.135899 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.135912 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.135927 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.135966 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.135979 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.135992 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.136032 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.136046 | orchestrator |
2026-03-01 00:35:08.136061 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-01 00:35:08.136075 | orchestrator | Sunday 01 March 2026 00:33:57 +0000 (0:00:00.281) 0:04:11.834 **********
2026-03-01 00:35:08.136089 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.136103 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.136118 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.136131 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.136144 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.136158 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.136172 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.136186 | orchestrator |
2026-03-01 00:35:08.136199 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-01 00:35:08.136215 | orchestrator | Sunday 01 March 2026 00:33:57 +0000 (0:00:00.294) 0:04:12.128 **********
2026-03-01 00:35:08.136229 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.136244 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.136258 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.136272 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.136285 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.136299 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.136313 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.136326 | orchestrator |
2026-03-01 00:35:08.136353 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-01 00:35:08.136378 | orchestrator | Sunday 01 March 2026 00:33:57 +0000 (0:00:00.290) 0:04:12.418 **********
2026-03-01 00:35:08.136392 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.136405 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.136418 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.136445 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.136458 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.136471 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.136484 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.136497 | orchestrator |
2026-03-01 00:35:08.136510 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-01 00:35:08.136523 | orchestrator | Sunday 01 March 2026 00:34:03 +0000 (0:00:05.903) 0:04:18.322 **********
2026-03-01 00:35:08.136539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:35:08.136555 | orchestrator |
2026-03-01 00:35:08.136568 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-01 00:35:08.136581 | orchestrator | Sunday 01 March 2026 00:34:03 +0000 (0:00:00.368) 0:04:18.690 **********
2026-03-01 00:35:08.136594 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136609 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-01 00:35:08.136623 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136636 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-01 00:35:08.136649 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:35:08.136660 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136674 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-01 00:35:08.136688 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:35:08.136700 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:35:08.136708 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136716 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-01 00:35:08.136724 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136732 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:35:08.136740 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-01 00:35:08.136748 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:35:08.136756 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136783 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-01 00:35:08.136792 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:35:08.136804 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-01 00:35:08.136814 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-01 00:35:08.136822 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:35:08.136830 | orchestrator |
2026-03-01 00:35:08.136838 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-01 00:35:08.136846 | orchestrator | Sunday 01 March 2026 00:34:04 +0000 (0:00:00.317) 0:04:19.008 **********
2026-03-01 00:35:08.136854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:35:08.136862 | orchestrator |
2026-03-01 00:35:08.136870 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-01 00:35:08.136878 | orchestrator | Sunday 01 March 2026 00:34:04 +0000 (0:00:00.375) 0:04:19.383 **********
2026-03-01 00:35:08.136886 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-01 00:35:08.136893 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:35:08.136901 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-01 00:35:08.136909 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-01 00:35:08.136917 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:35:08.136925 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:35:08.136932 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-01 00:35:08.136946 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:35:08.136954 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-01 00:35:08.136978 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-01 00:35:08.136987 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:35:08.136995 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:35:08.137079 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-01 00:35:08.137090 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:35:08.137098 | orchestrator |
2026-03-01 00:35:08.137106 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-01 00:35:08.137114 | orchestrator | Sunday 01 March 2026 00:34:04 +0000 (0:00:00.321) 0:04:19.705 **********
2026-03-01 00:35:08.137123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:35:08.137131 | orchestrator |
2026-03-01 00:35:08.137137 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-01 00:35:08.137144 | orchestrator | Sunday 01 March 2026 00:34:05 +0000 (0:00:00.384) 0:04:20.090 **********
2026-03-01 00:35:08.137151 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:08.137157 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:08.137164 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:08.137171 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:08.137177 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:08.137184 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:08.137191 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:08.137197 | orchestrator |
2026-03-01 00:35:08.137204 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-01 00:35:08.137211 | orchestrator | Sunday 01 March 2026 00:34:41 +0000 (0:00:36.580) 0:04:56.670 **********
2026-03-01 00:35:08.137218 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:08.137224 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:08.137231 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:08.137238 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:08.137244 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:08.137251 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:08.137257 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:08.137264 | orchestrator |
2026-03-01 00:35:08.137275 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-01 00:35:08.137282 | orchestrator | Sunday 01 March 2026 00:34:50 +0000 (0:00:08.791) 0:05:05.462 **********
2026-03-01 00:35:08.137289 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:08.137295 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:08.137302 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:08.137309 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:08.137315 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:08.137322 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:08.137328 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:08.137335 | orchestrator |
2026-03-01 00:35:08.137342 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-01 00:35:08.137348 | orchestrator | Sunday 01 March 2026 00:34:59 +0000 (0:00:08.804) 0:05:14.267 **********
2026-03-01 00:35:08.137355 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:08.137362 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:08.137368 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:08.137375 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:08.137382 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:08.137388 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:08.137395 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:08.137401 | orchestrator |
2026-03-01 00:35:08.137408 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-01 00:35:08.137420 | orchestrator | Sunday 01 March 2026 00:35:01 +0000 (0:00:01.974) 0:05:16.241 **********
2026-03-01 00:35:08.137427 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:08.137434 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:08.137440 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:08.137447 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:08.137454 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:08.137460 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:08.137467 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:08.137474 | orchestrator |
2026-03-01 00:35:08.137487 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-01 00:35:19.343658 | orchestrator | Sunday 01 March 2026 00:35:08 +0000 (0:00:06.705) 0:05:22.947 **********
2026-03-01 00:35:19.343778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:35:19.343798 | orchestrator |
2026-03-01 00:35:19.343811 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-01 00:35:19.343822 | orchestrator | Sunday 01 March 2026 00:35:08 +0000 (0:00:00.377) 0:05:23.324 **********
2026-03-01 00:35:19.343834 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:19.343842 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:19.343848 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:19.343855 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:19.343861 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:19.343867 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:19.343874 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:19.343880 | orchestrator |
2026-03-01 00:35:19.343886 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-01 00:35:19.343893 | orchestrator | Sunday 01 March 2026 00:35:09 +0000 (0:00:00.749) 0:05:24.074 **********
2026-03-01 00:35:19.343899 | orchestrator | ok: [testbed-manager]
2026-03-01 00:35:19.343907 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:35:19.343913 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:35:19.343919 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:35:19.343928 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:35:19.343938 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:35:19.343947 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:35:19.343957 | orchestrator |
2026-03-01 00:35:19.343968 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-01 00:35:19.343978 | orchestrator | Sunday 01 March 2026 00:35:11 +0000 (0:00:02.014) 0:05:26.089 **********
2026-03-01 00:35:19.344058 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:35:19.344066 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:35:19.344072 | orchestrator | changed: [testbed-manager]
2026-03-01 00:35:19.344079 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:35:19.344085 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:35:19.344091 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:35:19.344097 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:35:19.344103 | orchestrator |
2026-03-01 00:35:19.344110 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-01 00:35:19.344116 | orchestrator | Sunday 01 March 2026 00:35:12 +0000 (0:00:00.853) 0:05:26.942 **********
2026-03-01 00:35:19.344122 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:35:19.344128 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:35:19.344135 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:35:19.344141 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:35:19.344147 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:35:19.344153 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:35:19.344160 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:35:19.344166 | orchestrator |
2026-03-01 00:35:19.344172 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-01 00:35:19.344180 | orchestrator | Sunday 01 March 2026 00:35:12 +0000 (0:00:00.247)
0:05:27.190 ********** 2026-03-01 00:35:19.344210 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:35:19.344218 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:35:19.344226 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:35:19.344233 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:35:19.344240 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:35:19.344247 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:35:19.344254 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:35:19.344262 | orchestrator | 2026-03-01 00:35:19.344269 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-01 00:35:19.344277 | orchestrator | Sunday 01 March 2026 00:35:12 +0000 (0:00:00.354) 0:05:27.545 ********** 2026-03-01 00:35:19.344284 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:35:19.344292 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:35:19.344299 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:35:19.344307 | orchestrator | ok: [testbed-manager] 2026-03-01 00:35:19.344314 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:35:19.344321 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:35:19.344340 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:35:19.344348 | orchestrator | 2026-03-01 00:35:19.344355 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-01 00:35:19.344362 | orchestrator | Sunday 01 March 2026 00:35:13 +0000 (0:00:00.291) 0:05:27.836 ********** 2026-03-01 00:35:19.344370 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:35:19.344377 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:35:19.344384 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:35:19.344391 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:35:19.344398 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:35:19.344405 | orchestrator | skipping: [testbed-node-1] 2026-03-01 
00:35:19.344412 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:35:19.344419 | orchestrator | 2026-03-01 00:35:19.344426 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-01 00:35:19.344434 | orchestrator | Sunday 01 March 2026 00:35:13 +0000 (0:00:00.252) 0:05:28.089 ********** 2026-03-01 00:35:19.344441 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:35:19.344448 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:35:19.344455 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:35:19.344462 | orchestrator | ok: [testbed-manager] 2026-03-01 00:35:19.344469 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:35:19.344476 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:35:19.344483 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:35:19.344489 | orchestrator | 2026-03-01 00:35:19.344497 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-01 00:35:19.344504 | orchestrator | Sunday 01 March 2026 00:35:13 +0000 (0:00:00.297) 0:05:28.386 ********** 2026-03-01 00:35:19.344511 | orchestrator | ok: [testbed-node-3] =>  2026-03-01 00:35:19.344518 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344526 | orchestrator | ok: [testbed-node-4] =>  2026-03-01 00:35:19.344533 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344541 | orchestrator | ok: [testbed-node-5] =>  2026-03-01 00:35:19.344547 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344553 | orchestrator | ok: [testbed-manager] =>  2026-03-01 00:35:19.344559 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344582 | orchestrator | ok: [testbed-node-0] =>  2026-03-01 00:35:19.344589 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344595 | orchestrator | ok: [testbed-node-1] =>  2026-03-01 00:35:19.344602 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344613 | orchestrator | ok: [testbed-node-2] =>  
2026-03-01 00:35:19.344623 | orchestrator |  docker_version: 5:27.5.1 2026-03-01 00:35:19.344632 | orchestrator | 2026-03-01 00:35:19.344643 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-01 00:35:19.344655 | orchestrator | Sunday 01 March 2026 00:35:13 +0000 (0:00:00.287) 0:05:28.674 ********** 2026-03-01 00:35:19.344665 | orchestrator | ok: [testbed-node-3] =>  2026-03-01 00:35:19.344687 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344695 | orchestrator | ok: [testbed-node-4] =>  2026-03-01 00:35:19.344702 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344708 | orchestrator | ok: [testbed-node-5] =>  2026-03-01 00:35:19.344714 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344720 | orchestrator | ok: [testbed-manager] =>  2026-03-01 00:35:19.344726 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344733 | orchestrator | ok: [testbed-node-0] =>  2026-03-01 00:35:19.344742 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344752 | orchestrator | ok: [testbed-node-1] =>  2026-03-01 00:35:19.344763 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344772 | orchestrator | ok: [testbed-node-2] =>  2026-03-01 00:35:19.344782 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-01 00:35:19.344791 | orchestrator | 2026-03-01 00:35:19.344802 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-01 00:35:19.344813 | orchestrator | Sunday 01 March 2026 00:35:14 +0000 (0:00:00.290) 0:05:28.965 ********** 2026-03-01 00:35:19.344824 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:35:19.344834 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:35:19.344844 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:35:19.344854 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:35:19.344861 | orchestrator | skipping: [testbed-node-0] 
2026-03-01 00:35:19.344867 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:35:19.344873 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:35:19.344879 | orchestrator | 2026-03-01 00:35:19.344885 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-01 00:35:19.344892 | orchestrator | Sunday 01 March 2026 00:35:14 +0000 (0:00:00.255) 0:05:29.220 ********** 2026-03-01 00:35:19.344898 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:35:19.344904 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:35:19.344910 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:35:19.344916 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:35:19.344922 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:35:19.344928 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:35:19.344934 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:35:19.344940 | orchestrator | 2026-03-01 00:35:19.344947 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-01 00:35:19.344953 | orchestrator | Sunday 01 March 2026 00:35:14 +0000 (0:00:00.377) 0:05:29.597 ********** 2026-03-01 00:35:19.344961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:35:19.344969 | orchestrator | 2026-03-01 00:35:19.344975 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-01 00:35:19.344981 | orchestrator | Sunday 01 March 2026 00:35:15 +0000 (0:00:00.381) 0:05:29.979 ********** 2026-03-01 00:35:19.345005 | orchestrator | ok: [testbed-manager] 2026-03-01 00:35:19.345012 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:35:19.345018 | orchestrator | ok: [testbed-node-1] 2026-03-01 
00:35:19.345024 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:35:19.345031 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:35:19.345037 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:35:19.345043 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:35:19.345049 | orchestrator | 2026-03-01 00:35:19.345055 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-01 00:35:19.345061 | orchestrator | Sunday 01 March 2026 00:35:15 +0000 (0:00:00.802) 0:05:30.782 ********** 2026-03-01 00:35:19.345067 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:35:19.345079 | orchestrator | ok: [testbed-manager] 2026-03-01 00:35:19.345085 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:35:19.345091 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:35:19.345097 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:35:19.345109 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:35:19.345115 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:35:19.345121 | orchestrator | 2026-03-01 00:35:19.345128 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-01 00:35:19.345135 | orchestrator | Sunday 01 March 2026 00:35:18 +0000 (0:00:02.998) 0:05:33.780 ********** 2026-03-01 00:35:19.345141 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-01 00:35:19.345148 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-01 00:35:19.345154 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-01 00:35:19.345160 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-01 00:35:19.345166 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-01 00:35:19.345172 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:35:19.345178 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-01 00:35:19.345185 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-01 00:35:19.345191 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-01 00:35:19.345197 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-01 00:35:19.345203 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:35:19.345209 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-01 00:35:19.345215 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-01 00:35:19.345222 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-01 00:35:19.345228 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:35:19.345234 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-01 00:35:19.345246 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-01 00:36:24.892409 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-01 00:36:24.892500 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:24.892511 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-01 00:36:24.892519 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-01 00:36:24.892526 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-01 00:36:24.892533 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:24.892541 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:24.892548 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-01 00:36:24.892554 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-01 00:36:24.892561 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-01 00:36:24.892568 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:24.892575 | orchestrator | 2026-03-01 00:36:24.892583 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-01 00:36:24.892592 | orchestrator | 
Sunday 01 March 2026 00:35:19 +0000 (0:00:00.621) 0:05:34.402 ********** 2026-03-01 00:36:24.892599 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.892606 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.892613 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.892620 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.892626 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.892633 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.892640 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.892647 | orchestrator | 2026-03-01 00:36:24.892654 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-01 00:36:24.892660 | orchestrator | Sunday 01 March 2026 00:35:27 +0000 (0:00:07.816) 0:05:42.219 ********** 2026-03-01 00:36:24.892667 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.892674 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.892681 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.892688 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.892694 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.892701 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.892729 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.892736 | orchestrator | 2026-03-01 00:36:24.892743 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-01 00:36:24.892750 | orchestrator | Sunday 01 March 2026 00:35:28 +0000 (0:00:01.061) 0:05:43.280 ********** 2026-03-01 00:36:24.892757 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.892764 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.892770 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.892777 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.892784 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.892790 | 
orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.892797 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.892804 | orchestrator | 2026-03-01 00:36:24.892811 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-01 00:36:24.892818 | orchestrator | Sunday 01 March 2026 00:35:37 +0000 (0:00:08.967) 0:05:52.248 ********** 2026-03-01 00:36:24.892825 | orchestrator | changed: [testbed-manager] 2026-03-01 00:36:24.892832 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.892839 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.892846 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.892852 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.892859 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.892866 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.892872 | orchestrator | 2026-03-01 00:36:24.892879 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-01 00:36:24.892886 | orchestrator | Sunday 01 March 2026 00:35:41 +0000 (0:00:03.828) 0:05:56.076 ********** 2026-03-01 00:36:24.892893 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.892970 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.892982 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.892990 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.892998 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893006 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893013 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893022 | orchestrator | 2026-03-01 00:36:24.893030 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-01 00:36:24.893049 | orchestrator | Sunday 01 March 2026 00:35:42 +0000 (0:00:01.636) 0:05:57.713 ********** 2026-03-01 00:36:24.893057 | orchestrator | ok: 
[testbed-manager] 2026-03-01 00:36:24.893065 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893073 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893081 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893088 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893096 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893104 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893112 | orchestrator | 2026-03-01 00:36:24.893120 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-01 00:36:24.893128 | orchestrator | Sunday 01 March 2026 00:35:44 +0000 (0:00:01.284) 0:05:58.997 ********** 2026-03-01 00:36:24.893136 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:24.893143 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:24.893152 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:24.893159 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:24.893167 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:24.893175 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:24.893183 | orchestrator | changed: [testbed-manager] 2026-03-01 00:36:24.893191 | orchestrator | 2026-03-01 00:36:24.893199 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-01 00:36:24.893207 | orchestrator | Sunday 01 March 2026 00:35:44 +0000 (0:00:00.676) 0:05:59.674 ********** 2026-03-01 00:36:24.893215 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.893223 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893231 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893245 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893253 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893261 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893269 | orchestrator | changed: [testbed-node-2] 
2026-03-01 00:36:24.893277 | orchestrator | 2026-03-01 00:36:24.893285 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-01 00:36:24.893306 | orchestrator | Sunday 01 March 2026 00:35:55 +0000 (0:00:10.962) 0:06:10.637 ********** 2026-03-01 00:36:24.893314 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893323 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893331 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893339 | orchestrator | changed: [testbed-manager] 2026-03-01 00:36:24.893347 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893355 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893363 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893370 | orchestrator | 2026-03-01 00:36:24.893376 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-01 00:36:24.893383 | orchestrator | Sunday 01 March 2026 00:35:56 +0000 (0:00:00.912) 0:06:11.550 ********** 2026-03-01 00:36:24.893390 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.893396 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893403 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893410 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893417 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893423 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893430 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893436 | orchestrator | 2026-03-01 00:36:24.893443 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-01 00:36:24.893450 | orchestrator | Sunday 01 March 2026 00:36:06 +0000 (0:00:10.157) 0:06:21.707 ********** 2026-03-01 00:36:24.893457 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.893463 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893470 | 
orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893477 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893483 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893490 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893496 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893503 | orchestrator | 2026-03-01 00:36:24.893510 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-01 00:36:24.893517 | orchestrator | Sunday 01 March 2026 00:36:18 +0000 (0:00:11.291) 0:06:32.999 ********** 2026-03-01 00:36:24.893524 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-01 00:36:24.893530 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-01 00:36:24.893537 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-01 00:36:24.893544 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-01 00:36:24.893551 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-01 00:36:24.893557 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-01 00:36:24.893564 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-01 00:36:24.893571 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-01 00:36:24.893578 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-01 00:36:24.893584 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-01 00:36:24.893591 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-01 00:36:24.893598 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-01 00:36:24.893605 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-01 00:36:24.893611 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-01 00:36:24.893618 | orchestrator | 2026-03-01 00:36:24.893625 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-01 00:36:24.893632 | orchestrator | Sunday 01 March 2026 00:36:19 +0000 (0:00:01.248) 0:06:34.248 ********** 2026-03-01 00:36:24.893638 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:24.893650 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:24.893657 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:24.893664 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:24.893670 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:24.893677 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:24.893683 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:24.893690 | orchestrator | 2026-03-01 00:36:24.893697 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-01 00:36:24.893704 | orchestrator | Sunday 01 March 2026 00:36:19 +0000 (0:00:00.523) 0:06:34.771 ********** 2026-03-01 00:36:24.893711 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:24.893717 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:24.893724 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:24.893731 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:24.893737 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:24.893744 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:24.893751 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:24.893757 | orchestrator | 2026-03-01 00:36:24.893764 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-01 00:36:24.893772 | orchestrator | Sunday 01 March 2026 00:36:23 +0000 (0:00:04.047) 0:06:38.818 ********** 2026-03-01 00:36:24.893778 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:24.893785 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:24.893792 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:24.893799 | orchestrator | skipping: 
[testbed-manager] 2026-03-01 00:36:24.893805 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:24.893812 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:24.893818 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:24.893825 | orchestrator | 2026-03-01 00:36:24.893833 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-01 00:36:24.893840 | orchestrator | Sunday 01 March 2026 00:36:24 +0000 (0:00:00.631) 0:06:39.450 ********** 2026-03-01 00:36:24.893847 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-01 00:36:24.893853 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-01 00:36:24.893890 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:24.893897 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-01 00:36:24.893930 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-01 00:36:24.893941 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:24.893951 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-01 00:36:24.893961 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-01 00:36:24.893971 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:24.893989 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-01 00:36:43.709187 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-01 00:36:43.710068 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:43.710107 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-01 00:36:43.710117 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-01 00:36:43.710126 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:43.710135 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-01 00:36:43.710142 | 
orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-01 00:36:43.710149 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:43.710156 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-01 00:36:43.710162 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-01 00:36:43.710169 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:43.710175 | orchestrator | 2026-03-01 00:36:43.710184 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-01 00:36:43.710212 | orchestrator | Sunday 01 March 2026 00:36:25 +0000 (0:00:00.551) 0:06:40.002 ********** 2026-03-01 00:36:43.710219 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:43.710226 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:43.710232 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:43.710238 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:43.710244 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:43.710251 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:43.710257 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:43.710263 | orchestrator | 2026-03-01 00:36:43.710270 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-01 00:36:43.710277 | orchestrator | Sunday 01 March 2026 00:36:25 +0000 (0:00:00.502) 0:06:40.504 ********** 2026-03-01 00:36:43.710283 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:43.710289 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:43.710295 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:43.710302 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:43.710308 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:43.710314 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:43.710320 | orchestrator | skipping: [testbed-node-2] 
2026-03-01 00:36:43.710327 | orchestrator | 2026-03-01 00:36:43.710333 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-01 00:36:43.710340 | orchestrator | Sunday 01 March 2026 00:36:26 +0000 (0:00:00.458) 0:06:40.962 ********** 2026-03-01 00:36:43.710346 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:36:43.710352 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:36:43.710358 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:36:43.710365 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:43.710371 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:36:43.710377 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:36:43.710383 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:36:43.710390 | orchestrator | 2026-03-01 00:36:43.710396 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-01 00:36:43.710403 | orchestrator | Sunday 01 March 2026 00:36:26 +0000 (0:00:00.501) 0:06:41.464 ********** 2026-03-01 00:36:43.710409 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.710416 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.710422 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.710428 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:36:43.710434 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.710440 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.710446 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:36:43.710452 | orchestrator | 2026-03-01 00:36:43.710459 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-01 00:36:43.710465 | orchestrator | Sunday 01 March 2026 00:36:28 +0000 (0:00:02.283) 0:06:43.747 ********** 2026-03-01 00:36:43.710473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:36:43.710481 | orchestrator | 2026-03-01 00:36:43.710498 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-01 00:36:43.710505 | orchestrator | Sunday 01 March 2026 00:36:29 +0000 (0:00:00.809) 0:06:44.557 ********** 2026-03-01 00:36:43.710512 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:43.710518 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:43.710524 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:43.710530 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.710536 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:43.710543 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:43.710549 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:43.710556 | orchestrator | 2026-03-01 00:36:43.710562 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-01 00:36:43.710574 | orchestrator | Sunday 01 March 2026 00:36:30 +0000 (0:00:00.827) 0:06:45.385 ********** 2026-03-01 00:36:43.710580 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:43.710586 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:43.710592 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:43.710599 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.710605 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:43.710611 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:43.710617 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:43.710624 | orchestrator | 2026-03-01 00:36:43.710630 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-01 00:36:43.710636 | orchestrator | Sunday 01 March 2026 00:36:31 +0000 (0:00:01.057) 0:06:46.443 ********** 2026-03-01 00:36:43.710643 | orchestrator | changed: [testbed-node-3] 2026-03-01 
00:36:43.710649 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:43.710655 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.710661 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:43.710668 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:43.710674 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:43.710680 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:43.710686 | orchestrator | 2026-03-01 00:36:43.710693 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-01 00:36:43.710718 | orchestrator | Sunday 01 March 2026 00:36:33 +0000 (0:00:01.417) 0:06:47.861 ********** 2026-03-01 00:36:43.710725 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:36:43.710731 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.710737 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.710743 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:36:43.710750 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.710756 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.710762 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:36:43.710768 | orchestrator | 2026-03-01 00:36:43.710775 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-01 00:36:43.710781 | orchestrator | Sunday 01 March 2026 00:36:34 +0000 (0:00:01.456) 0:06:49.317 ********** 2026-03-01 00:36:43.710787 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:43.710794 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.710800 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:43.710806 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:43.710812 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:43.710818 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:43.710825 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:43.710831 | orchestrator | 2026-03-01 
00:36:43.710837 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-01 00:36:43.710844 | orchestrator | Sunday 01 March 2026 00:36:35 +0000 (0:00:01.347) 0:06:50.665 ********** 2026-03-01 00:36:43.710850 | orchestrator | changed: [testbed-manager] 2026-03-01 00:36:43.710856 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:36:43.710862 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:36:43.710868 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:36:43.710911 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:36:43.710918 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:36:43.710925 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:36:43.710931 | orchestrator | 2026-03-01 00:36:43.710937 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-01 00:36:43.710944 | orchestrator | Sunday 01 March 2026 00:36:37 +0000 (0:00:01.409) 0:06:52.075 ********** 2026-03-01 00:36:43.710950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:36:43.710957 | orchestrator | 2026-03-01 00:36:43.710963 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-01 00:36:43.710969 | orchestrator | Sunday 01 March 2026 00:36:38 +0000 (0:00:00.844) 0:06:52.919 ********** 2026-03-01 00:36:43.710985 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.710992 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:36:43.710998 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.711004 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.711011 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.711017 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.711023 | orchestrator | ok: 
[testbed-node-2] 2026-03-01 00:36:43.711029 | orchestrator | 2026-03-01 00:36:43.711036 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-01 00:36:43.711042 | orchestrator | Sunday 01 March 2026 00:36:39 +0000 (0:00:01.329) 0:06:54.249 ********** 2026-03-01 00:36:43.711048 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.711054 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.711060 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.711067 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:36:43.711073 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.711079 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.711085 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:36:43.711092 | orchestrator | 2026-03-01 00:36:43.711098 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-01 00:36:43.711104 | orchestrator | Sunday 01 March 2026 00:36:40 +0000 (0:00:01.092) 0:06:55.342 ********** 2026-03-01 00:36:43.711111 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.711117 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.711123 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.711130 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:36:43.711136 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.711142 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.711148 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:36:43.711155 | orchestrator | 2026-03-01 00:36:43.711161 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-01 00:36:43.711168 | orchestrator | Sunday 01 March 2026 00:36:41 +0000 (0:00:01.072) 0:06:56.414 ********** 2026-03-01 00:36:43.711174 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:36:43.711181 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:36:43.711187 | orchestrator | ok: [testbed-node-5] 2026-03-01 
00:36:43.711193 | orchestrator | ok: [testbed-manager] 2026-03-01 00:36:43.711199 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:36:43.711206 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:36:43.711212 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:36:43.711220 | orchestrator | 2026-03-01 00:36:43.711230 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-01 00:36:43.711241 | orchestrator | Sunday 01 March 2026 00:36:42 +0000 (0:00:01.220) 0:06:57.635 ********** 2026-03-01 00:36:43.711301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:36:43.711313 | orchestrator | 2026-03-01 00:36:43.711323 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:36:43.711333 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.770) 0:06:58.406 ********** 2026-03-01 00:36:43.711343 | orchestrator | 2026-03-01 00:36:43.711353 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:36:43.711363 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.036) 0:06:58.442 ********** 2026-03-01 00:36:43.711374 | orchestrator | 2026-03-01 00:36:43.711384 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:36:43.711395 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.034) 0:06:58.477 ********** 2026-03-01 00:36:43.711405 | orchestrator | 2026-03-01 00:36:43.711415 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:36:43.711433 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.046) 0:06:58.523 ********** 2026-03-01 00:37:09.557539 | orchestrator | 
2026-03-01 00:37:09.557679 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:37:09.557723 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.035) 0:06:58.558 ********** 2026-03-01 00:37:09.557736 | orchestrator | 2026-03-01 00:37:09.557747 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:37:09.557758 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.035) 0:06:58.594 ********** 2026-03-01 00:37:09.557769 | orchestrator | 2026-03-01 00:37:09.557780 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-01 00:37:09.557791 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.038) 0:06:58.633 ********** 2026-03-01 00:37:09.557802 | orchestrator | 2026-03-01 00:37:09.557813 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-01 00:37:09.557824 | orchestrator | Sunday 01 March 2026 00:36:43 +0000 (0:00:00.035) 0:06:58.668 ********** 2026-03-01 00:37:09.557835 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:09.557891 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:09.557903 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:09.557914 | orchestrator | 2026-03-01 00:37:09.557925 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-01 00:37:09.557936 | orchestrator | Sunday 01 March 2026 00:36:45 +0000 (0:00:01.290) 0:06:59.959 ********** 2026-03-01 00:37:09.557947 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:09.557959 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:09.557970 | orchestrator | changed: [testbed-manager] 2026-03-01 00:37:09.557980 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:09.557992 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:09.558002 | orchestrator | changed: 
[testbed-node-0] 2026-03-01 00:37:09.558014 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:09.558120 | orchestrator | 2026-03-01 00:37:09.558134 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-01 00:37:09.558148 | orchestrator | Sunday 01 March 2026 00:36:46 +0000 (0:00:01.474) 0:07:01.434 ********** 2026-03-01 00:37:09.558161 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:09.558174 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:09.558187 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:09.558200 | orchestrator | changed: [testbed-manager] 2026-03-01 00:37:09.558213 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:09.558225 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:09.558239 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:09.558252 | orchestrator | 2026-03-01 00:37:09.558264 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-01 00:37:09.558277 | orchestrator | Sunday 01 March 2026 00:36:47 +0000 (0:00:01.224) 0:07:02.658 ********** 2026-03-01 00:37:09.558290 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:09.558302 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:09.558315 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:09.558328 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:09.558340 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:09.558352 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:09.558365 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:09.558378 | orchestrator | 2026-03-01 00:37:09.558391 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-01 00:37:09.558404 | orchestrator | Sunday 01 March 2026 00:36:50 +0000 (0:00:02.375) 0:07:05.033 ********** 2026-03-01 00:37:09.558417 | orchestrator | skipping: [testbed-node-3] 
2026-03-01 00:37:09.558430 | orchestrator | 2026-03-01 00:37:09.558443 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-01 00:37:09.558455 | orchestrator | Sunday 01 March 2026 00:36:50 +0000 (0:00:00.098) 0:07:05.131 ********** 2026-03-01 00:37:09.558466 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:09.558476 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:09.558487 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:09.558498 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:09.558509 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:09.558530 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:09.558541 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:09.558552 | orchestrator | 2026-03-01 00:37:09.558563 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-01 00:37:09.558588 | orchestrator | Sunday 01 March 2026 00:36:51 +0000 (0:00:00.950) 0:07:06.082 ********** 2026-03-01 00:37:09.558600 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:09.558611 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:09.558621 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:37:09.558632 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:09.558643 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:37:09.558654 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:09.558664 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:09.558675 | orchestrator | 2026-03-01 00:37:09.558687 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-01 00:37:09.558697 | orchestrator | Sunday 01 March 2026 00:36:51 +0000 (0:00:00.598) 0:07:06.680 ********** 2026-03-01 00:37:09.558710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:37:09.558724 | orchestrator | 2026-03-01 00:37:09.558735 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-01 00:37:09.558745 | orchestrator | Sunday 01 March 2026 00:36:52 +0000 (0:00:00.793) 0:07:07.474 ********** 2026-03-01 00:37:09.558756 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:09.558767 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:09.558778 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:09.558789 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:09.558800 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:09.558810 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:09.558821 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:09.558832 | orchestrator | 2026-03-01 00:37:09.558926 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-01 00:37:09.558940 | orchestrator | Sunday 01 March 2026 00:36:53 +0000 (0:00:00.779) 0:07:08.254 ********** 2026-03-01 00:37:09.558951 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-01 00:37:09.558981 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-01 00:37:09.558993 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-01 00:37:09.559004 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-01 00:37:09.559015 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-01 00:37:09.559026 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-01 00:37:09.559037 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-01 00:37:09.559048 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-01 00:37:09.559059 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-03-01 00:37:09.559070 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-01 00:37:09.559081 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-01 00:37:09.559091 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-01 00:37:09.559102 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-01 00:37:09.559113 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-01 00:37:09.559124 | orchestrator | 2026-03-01 00:37:09.559135 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-01 00:37:09.559146 | orchestrator | Sunday 01 March 2026 00:36:56 +0000 (0:00:02.617) 0:07:10.871 ********** 2026-03-01 00:37:09.559157 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:09.559168 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:09.559178 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:37:09.559189 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:09.559209 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:37:09.559220 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:09.559231 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:09.559242 | orchestrator | 2026-03-01 00:37:09.559253 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-01 00:37:09.559264 | orchestrator | Sunday 01 March 2026 00:36:56 +0000 (0:00:00.483) 0:07:11.355 ********** 2026-03-01 00:37:09.559277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:37:09.559290 | orchestrator | 2026-03-01 00:37:09.559301 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-03-01 00:37:09.559311 | orchestrator | Sunday 01 March 2026 00:36:57 +0000 (0:00:00.776) 0:07:12.131 ********** 2026-03-01 00:37:09.559322 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:09.559333 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:09.559344 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:09.559355 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:09.559366 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:09.559377 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:09.559387 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:09.559398 | orchestrator | 2026-03-01 00:37:09.559409 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-01 00:37:09.559421 | orchestrator | Sunday 01 March 2026 00:36:58 +0000 (0:00:00.826) 0:07:12.958 ********** 2026-03-01 00:37:09.559431 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:09.559442 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:09.559451 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:09.559461 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:09.559470 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:09.559480 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:09.559490 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:09.559499 | orchestrator | 2026-03-01 00:37:09.559509 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-01 00:37:09.559519 | orchestrator | Sunday 01 March 2026 00:36:59 +0000 (0:00:01.035) 0:07:13.994 ********** 2026-03-01 00:37:09.559529 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:09.559538 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:09.559548 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:37:09.559564 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:09.559574 | orchestrator | skipping: [testbed-node-0] 
2026-03-01 00:37:09.559584 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:09.559593 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:09.559603 | orchestrator | 2026-03-01 00:37:09.559613 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-01 00:37:09.559622 | orchestrator | Sunday 01 March 2026 00:36:59 +0000 (0:00:00.488) 0:07:14.483 ********** 2026-03-01 00:37:09.559632 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:09.559642 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:09.559651 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:09.559661 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:09.559671 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:09.559680 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:09.559690 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:09.559699 | orchestrator | 2026-03-01 00:37:09.559709 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-01 00:37:09.559719 | orchestrator | Sunday 01 March 2026 00:37:01 +0000 (0:00:01.522) 0:07:16.005 ********** 2026-03-01 00:37:09.559729 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:09.559739 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:09.559748 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:37:09.559758 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:09.559768 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:37:09.559784 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:09.559793 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:09.559803 | orchestrator | 2026-03-01 00:37:09.559813 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-01 00:37:09.559822 | orchestrator | Sunday 01 March 2026 00:37:01 +0000 (0:00:00.496) 0:07:16.501 ********** 2026-03-01 00:37:09.559832 | orchestrator | 
ok: [testbed-manager] 2026-03-01 00:37:09.559861 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:09.559871 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:09.559881 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:09.559891 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:09.559900 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:09.559916 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:40.281600 | orchestrator | 2026-03-01 00:37:40.281736 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-01 00:37:40.281767 | orchestrator | Sunday 01 March 2026 00:37:09 +0000 (0:00:07.926) 0:07:24.428 ********** 2026-03-01 00:37:40.281789 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:40.281886 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.281900 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:40.281912 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:40.281923 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:40.281934 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:40.281945 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:40.281956 | orchestrator | 2026-03-01 00:37:40.281967 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-01 00:37:40.281979 | orchestrator | Sunday 01 March 2026 00:37:10 +0000 (0:00:01.305) 0:07:25.734 ********** 2026-03-01 00:37:40.281990 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.282005 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:40.282102 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:40.282125 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:40.282146 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:40.282167 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:40.282188 | orchestrator | changed: [testbed-node-2] 2026-03-01 
00:37:40.282208 | orchestrator | 2026-03-01 00:37:40.282228 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-01 00:37:40.282248 | orchestrator | Sunday 01 March 2026 00:37:12 +0000 (0:00:01.656) 0:07:27.390 ********** 2026-03-01 00:37:40.282269 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.282289 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:37:40.282310 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:37:40.282331 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:37:40.282346 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:37:40.282358 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:37:40.282371 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:37:40.282384 | orchestrator | 2026-03-01 00:37:40.282396 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-01 00:37:40.282409 | orchestrator | Sunday 01 March 2026 00:37:14 +0000 (0:00:01.534) 0:07:28.925 ********** 2026-03-01 00:37:40.282422 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:40.282436 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:40.282449 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.282462 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:40.282474 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:40.282487 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:40.282497 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:40.282508 | orchestrator | 2026-03-01 00:37:40.282519 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-01 00:37:40.282530 | orchestrator | Sunday 01 March 2026 00:37:15 +0000 (0:00:00.896) 0:07:29.822 ********** 2026-03-01 00:37:40.282541 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:40.282552 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:40.282563 | orchestrator | skipping: 
[testbed-node-5] 2026-03-01 00:37:40.282606 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:40.282626 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:37:40.282645 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:40.282663 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:40.282682 | orchestrator | 2026-03-01 00:37:40.282695 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-01 00:37:40.282706 | orchestrator | Sunday 01 March 2026 00:37:15 +0000 (0:00:00.703) 0:07:30.525 ********** 2026-03-01 00:37:40.282717 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:37:40.282728 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:37:40.282738 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:37:40.282749 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:37:40.282760 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:37:40.282770 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:37:40.282781 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:37:40.282792 | orchestrator | 2026-03-01 00:37:40.282828 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-01 00:37:40.282849 | orchestrator | Sunday 01 March 2026 00:37:16 +0000 (0:00:00.464) 0:07:30.990 ********** 2026-03-01 00:37:40.282867 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:40.282885 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:40.282905 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:40.282924 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.282943 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:40.282960 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:40.282976 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:40.282987 | orchestrator | 2026-03-01 00:37:40.282998 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-03-01 00:37:40.283009 | orchestrator | Sunday 01 March 2026 00:37:16 +0000 (0:00:00.462) 0:07:31.452 ********** 2026-03-01 00:37:40.283020 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:40.283031 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:40.283041 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:40.283052 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.283063 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:40.283073 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:40.283084 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:40.283094 | orchestrator | 2026-03-01 00:37:40.283105 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-01 00:37:40.283116 | orchestrator | Sunday 01 March 2026 00:37:17 +0000 (0:00:00.562) 0:07:32.014 ********** 2026-03-01 00:37:40.283127 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:40.283138 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:40.283148 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:40.283159 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.283170 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:40.283180 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:37:40.283191 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:37:40.283201 | orchestrator | 2026-03-01 00:37:40.283212 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-01 00:37:40.283223 | orchestrator | Sunday 01 March 2026 00:37:17 +0000 (0:00:00.433) 0:07:32.448 ********** 2026-03-01 00:37:40.283234 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:37:40.283245 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:37:40.283255 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:37:40.283266 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:37:40.283277 | orchestrator | ok: [testbed-manager] 2026-03-01 00:37:40.283287 | orchestrator | ok: [testbed-node-2] 
2026-03-01 00:37:40.283299 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:37:40.283318 | orchestrator |
2026-03-01 00:37:40.283360 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-01 00:37:40.283380 | orchestrator | Sunday 01 March 2026 00:37:22 +0000 (0:00:04.948) 0:07:37.396 **********
2026-03-01 00:37:40.283398 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:37:40.283415 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:37:40.283469 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:37:40.283487 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:37:40.283504 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:37:40.283520 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:37:40.283537 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:37:40.283557 | orchestrator |
2026-03-01 00:37:40.283576 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-01 00:37:40.283594 | orchestrator | Sunday 01 March 2026 00:37:23 +0000 (0:00:00.510) 0:07:37.907 **********
2026-03-01 00:37:40.283614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:37:40.283635 | orchestrator |
2026-03-01 00:37:40.283652 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-01 00:37:40.283670 | orchestrator | Sunday 01 March 2026 00:37:24 +0000 (0:00:00.975) 0:07:38.882 **********
2026-03-01 00:37:40.283687 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:37:40.283703 | orchestrator | ok: [testbed-manager]
2026-03-01 00:37:40.283720 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:37:40.283736 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:37:40.283753 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:37:40.283771 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:37:40.283788 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:37:40.283843 | orchestrator |
2026-03-01 00:37:40.283864 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-01 00:37:40.283883 | orchestrator | Sunday 01 March 2026 00:37:26 +0000 (0:00:02.063) 0:07:40.946 **********
2026-03-01 00:37:40.283899 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:37:40.283910 | orchestrator | ok: [testbed-manager]
2026-03-01 00:37:40.283921 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:37:40.283932 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:37:40.283943 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:37:40.283953 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:37:40.283964 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:37:40.283975 | orchestrator |
2026-03-01 00:37:40.283986 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-01 00:37:40.283998 | orchestrator | Sunday 01 March 2026 00:37:27 +0000 (0:00:01.169) 0:07:42.116 **********
2026-03-01 00:37:40.284009 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:37:40.284020 | orchestrator | ok: [testbed-manager]
2026-03-01 00:37:40.284030 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:37:40.284041 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:37:40.284052 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:37:40.284063 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:37:40.284074 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:37:40.284084 | orchestrator |
2026-03-01 00:37:40.284095 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-01 00:37:40.284106 | orchestrator | Sunday 01 March 2026 00:37:28 +0000 (0:00:00.786) 0:07:42.902 **********
2026-03-01 00:37:40.284117 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284131 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284142 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284153 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284172 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284183 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284205 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-01 00:37:40.284216 | orchestrator |
2026-03-01 00:37:40.284227 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-01 00:37:40.284239 | orchestrator | Sunday 01 March 2026 00:37:29 +0000 (0:00:01.810) 0:07:44.713 **********
2026-03-01 00:37:40.284250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:37:40.284261 | orchestrator |
2026-03-01 00:37:40.284273 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-01 00:37:40.284283 | orchestrator | Sunday 01 March 2026 00:37:30 +0000 (0:00:00.675) 0:07:45.388 **********
2026-03-01 00:37:40.284294 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:37:40.284305 | orchestrator | changed: [testbed-manager]
2026-03-01 00:37:40.284316 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:37:40.284327 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:37:40.284338 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:37:40.284349 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:37:40.284359 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:37:40.284370 | orchestrator |
2026-03-01 00:37:40.284393 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-01 00:38:09.885201 | orchestrator | Sunday 01 March 2026 00:37:40 +0000 (0:00:09.707) 0:07:55.095 **********
2026-03-01 00:38:09.885313 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:09.885331 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:09.885345 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:09.885358 | orchestrator | ok: [testbed-manager]
2026-03-01 00:38:09.885371 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:09.885385 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:09.885398 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:09.885411 | orchestrator |
2026-03-01 00:38:09.885426 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-01 00:38:09.885439 | orchestrator | Sunday 01 March 2026 00:37:42 +0000 (0:00:01.936) 0:07:57.032 **********
2026-03-01 00:38:09.885452 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:09.885466 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:09.885479 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:09.885492 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:09.885504 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:09.885518 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:09.885530 | orchestrator |
2026-03-01 00:38:09.885542 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-01 00:38:09.885555 | orchestrator | Sunday 01 March 2026 00:37:43 +0000 (0:00:01.244) 0:07:58.276 **********
2026-03-01 00:38:09.885569 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.885584 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.885596 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.885609 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.885623 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.885636 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.885651 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.885665 | orchestrator |
2026-03-01 00:38:09.885678 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-01 00:38:09.885691 | orchestrator |
2026-03-01 00:38:09.885705 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-01 00:38:09.885719 | orchestrator | Sunday 01 March 2026 00:37:44 +0000 (0:00:01.391) 0:07:59.668 **********
2026-03-01 00:38:09.885733 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:38:09.885747 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:38:09.885817 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:38:09.885833 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:38:09.885848 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:38:09.885862 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:38:09.885875 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:38:09.885889 | orchestrator |
2026-03-01 00:38:09.885902 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-01 00:38:09.885915 | orchestrator |
2026-03-01 00:38:09.885929 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-01 00:38:09.885943 | orchestrator | Sunday 01 March 2026 00:37:45 +0000 (0:00:00.486) 0:08:00.155 **********
2026-03-01 00:38:09.885956 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.885970 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.885983 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.885996 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.886010 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.886085 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.886098 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.886112 | orchestrator |
2026-03-01 00:38:09.886126 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-01 00:38:09.886140 | orchestrator | Sunday 01 March 2026 00:37:46 +0000 (0:00:01.290) 0:08:01.445 **********
2026-03-01 00:38:09.886162 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:09.886175 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:09.886188 | orchestrator | ok: [testbed-manager]
2026-03-01 00:38:09.886201 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:09.886214 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:09.886228 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:09.886242 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:09.886255 | orchestrator |
2026-03-01 00:38:09.886268 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-01 00:38:09.886281 | orchestrator | Sunday 01 March 2026 00:37:47 +0000 (0:00:01.316) 0:08:02.762 **********
2026-03-01 00:38:09.886294 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:38:09.886308 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:38:09.886337 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:38:09.886350 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:38:09.886363 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:38:09.886377 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:38:09.886391 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:38:09.886404 | orchestrator |
2026-03-01 00:38:09.886418 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-01 00:38:09.886430 | orchestrator | Sunday 01 March 2026 00:37:48 +0000 (0:00:00.529) 0:08:03.291 **********
2026-03-01 00:38:09.886444 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:38:09.886459 | orchestrator |
2026-03-01 00:38:09.886473 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-01 00:38:09.886485 | orchestrator | Sunday 01 March 2026 00:37:49 +0000 (0:00:00.700) 0:08:03.992 **********
2026-03-01 00:38:09.886501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:38:09.886517 | orchestrator |
2026-03-01 00:38:09.886529 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-01 00:38:09.886542 | orchestrator | Sunday 01 March 2026 00:37:49 +0000 (0:00:00.670) 0:08:04.662 **********
2026-03-01 00:38:09.886555 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.886568 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.886581 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.886594 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.886607 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.886632 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.886645 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.886657 | orchestrator |
2026-03-01 00:38:09.886694 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-01 00:38:09.886710 | orchestrator | Sunday 01 March 2026 00:37:58 +0000 (0:00:08.883) 0:08:13.546 **********
2026-03-01 00:38:09.886722 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.886734 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.886746 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.886759 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.886805 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.886819 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.886832 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.886845 | orchestrator |
2026-03-01 00:38:09.886859 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-01 00:38:09.886874 | orchestrator | Sunday 01 March 2026 00:37:59 +0000 (0:00:00.873) 0:08:14.420 **********
2026-03-01 00:38:09.886887 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.886900 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.886914 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.886927 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.886939 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.886952 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.886966 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.886980 | orchestrator |
2026-03-01 00:38:09.886993 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-01 00:38:09.887007 | orchestrator | Sunday 01 March 2026 00:38:00 +0000 (0:00:01.382) 0:08:15.803 **********
2026-03-01 00:38:09.887020 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.887034 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.887047 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.887060 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.887072 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.887085 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.887098 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.887111 | orchestrator |
2026-03-01 00:38:09.887125 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-01 00:38:09.887138 | orchestrator | Sunday 01 March 2026 00:38:02 +0000 (0:00:01.880) 0:08:17.683 **********
2026-03-01 00:38:09.887151 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.887164 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.887177 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.887189 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.887203 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.887216 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.887229 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.887242 | orchestrator |
2026-03-01 00:38:09.887255 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-01 00:38:09.887269 | orchestrator | Sunday 01 March 2026 00:38:04 +0000 (0:00:01.227) 0:08:18.911 **********
2026-03-01 00:38:09.887282 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.887295 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.887308 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.887321 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.887334 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.887347 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.887360 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.887373 | orchestrator |
2026-03-01 00:38:09.887387 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-01 00:38:09.887400 | orchestrator |
2026-03-01 00:38:09.887414 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-01 00:38:09.887427 | orchestrator | Sunday 01 March 2026 00:38:05 +0000 (0:00:01.135) 0:08:20.047 **********
2026-03-01 00:38:09.887453 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:38:09.887467 | orchestrator |
2026-03-01 00:38:09.887480 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-01 00:38:09.887493 | orchestrator | Sunday 01 March 2026 00:38:06 +0000 (0:00:00.937) 0:08:20.984 **********
2026-03-01 00:38:09.887506 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:09.887520 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:09.887541 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:09.887555 | orchestrator | ok: [testbed-manager]
2026-03-01 00:38:09.887568 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:09.887581 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:09.887594 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:09.887608 | orchestrator |
2026-03-01 00:38:09.887621 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-01 00:38:09.887635 | orchestrator | Sunday 01 March 2026 00:38:07 +0000 (0:00:00.842) 0:08:21.827 **********
2026-03-01 00:38:09.887649 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:09.887663 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:09.887676 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:09.887727 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:09.887741 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:09.887753 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:09.887825 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:09.887845 | orchestrator |
2026-03-01 00:38:09.887859 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-01 00:38:09.887871 | orchestrator | Sunday 01 March 2026 00:38:08 +0000 (0:00:01.112) 0:08:22.939 **********
2026-03-01 00:38:09.887884 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:38:09.887896 | orchestrator |
2026-03-01 00:38:09.887908 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-01 00:38:09.887921 | orchestrator | Sunday 01 March 2026 00:38:09 +0000 (0:00:00.934) 0:08:23.873 **********
2026-03-01 00:38:09.887936 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:09.887950 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:09.887964 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:09.887979 | orchestrator | ok: [testbed-manager]
2026-03-01 00:38:09.887992 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:09.888004 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:09.888016 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:09.888029 | orchestrator |
2026-03-01 00:38:09.888056 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-01 00:38:11.371261 | orchestrator | Sunday 01 March 2026 00:38:09 +0000 (0:00:00.823) 0:08:24.697 **********
2026-03-01 00:38:11.371377 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:11.371404 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:11.371417 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:11.371429 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:11.371439 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:11.371450 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:11.371461 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:11.371472 | orchestrator |
2026-03-01 00:38:11.371484 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:38:11.371497 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-01 00:38:11.371510 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-01 00:38:11.371521 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-01 00:38:11.371564 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-01 00:38:11.371576 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-01 00:38:11.371587 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-01 00:38:11.371598 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-01 00:38:11.371609 | orchestrator |
2026-03-01 00:38:11.371620 | orchestrator |
2026-03-01 00:38:11.371632 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:38:11.371643 | orchestrator | Sunday 01 March 2026 00:38:11 +0000 (0:00:01.147) 0:08:25.844 **********
2026-03-01 00:38:11.371654 | orchestrator | ===============================================================================
2026-03-01 00:38:11.371665 | orchestrator | osism.commons.packages : Install required packages --------------------- 86.72s
2026-03-01 00:38:11.371676 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.71s
2026-03-01 00:38:11.371687 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.58s
2026-03-01 00:38:11.371698 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.68s
2026-03-01 00:38:11.371709 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.40s
2026-03-01 00:38:11.371722 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.29s
2026-03-01 00:38:11.371732 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.96s
2026-03-01 00:38:11.371744 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.69s
2026-03-01 00:38:11.371754 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.16s
2026-03-01 00:38:11.371794 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.71s
2026-03-01 00:38:11.371815 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.16s
2026-03-01 00:38:11.371841 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.97s
2026-03-01 00:38:11.371852 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.88s
2026-03-01 00:38:11.371864 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.80s
2026-03-01 00:38:11.371875 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.79s
2026-03-01 00:38:11.371886 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.93s
2026-03-01 00:38:11.371896 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.82s
2026-03-01 00:38:11.371907 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.71s
2026-03-01 00:38:11.371918 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.90s
2026-03-01 00:38:11.371929 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.23s
2026-03-01 00:38:11.648313 | orchestrator | + osism apply fail2ban
2026-03-01 00:38:24.305804 | orchestrator | 2026-03-01 00:38:24 | INFO  | Prepare task for execution of fail2ban.
2026-03-01 00:38:24.375443 | orchestrator | 2026-03-01 00:38:24 | INFO  | Task 0f9a725c-bc5c-43a3-bb0f-425bf3fa5bf3 (fail2ban) was prepared for execution.
2026-03-01 00:38:24.375540 | orchestrator | 2026-03-01 00:38:24 | INFO  | It takes a moment until task 0f9a725c-bc5c-43a3-bb0f-425bf3fa5bf3 (fail2ban) has been started and output is visible here.
2026-03-01 00:38:45.140845 | orchestrator |
2026-03-01 00:38:45.140963 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-01 00:38:45.141012 | orchestrator |
2026-03-01 00:38:45.141027 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-01 00:38:45.141041 | orchestrator | Sunday 01 March 2026 00:38:28 +0000 (0:00:00.232) 0:00:00.232 **********
2026-03-01 00:38:45.141057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 00:38:45.141073 | orchestrator |
2026-03-01 00:38:45.141085 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-01 00:38:45.141098 | orchestrator | Sunday 01 March 2026 00:38:29 +0000 (0:00:01.116) 0:00:01.349 **********
2026-03-01 00:38:45.141111 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:45.141125 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:45.141137 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:45.141150 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:45.141163 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:45.141177 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:45.141191 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:45.141204 | orchestrator |
2026-03-01 00:38:45.141219 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-01 00:38:45.141233 | orchestrator | Sunday 01 March 2026 00:38:40 +0000 (0:00:10.866) 0:00:12.215 **********
2026-03-01 00:38:45.141247 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:45.141261 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:45.141275 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:45.141289 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:45.141304 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:45.141318 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:45.141333 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:45.141346 | orchestrator |
2026-03-01 00:38:45.141360 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-01 00:38:45.141375 | orchestrator | Sunday 01 March 2026 00:38:41 +0000 (0:00:01.427) 0:00:13.643 **********
2026-03-01 00:38:45.141390 | orchestrator | ok: [testbed-manager]
2026-03-01 00:38:45.141405 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:38:45.141420 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:38:45.141434 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:38:45.141449 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:38:45.141463 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:38:45.141478 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:38:45.141492 | orchestrator |
2026-03-01 00:38:45.141506 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-01 00:38:45.141519 | orchestrator | Sunday 01 March 2026 00:38:43 +0000 (0:00:01.435) 0:00:15.078 **********
2026-03-01 00:38:45.141532 | orchestrator | changed: [testbed-manager]
2026-03-01 00:38:45.141545 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:38:45.141557 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:38:45.141570 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:38:45.141581 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:38:45.141594 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:38:45.141606 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:38:45.141617 | orchestrator |
2026-03-01 00:38:45.141630 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:38:45.141643 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141761 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141784 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141799 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141845 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141859 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141872 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:38:45.141886 | orchestrator |
2026-03-01 00:38:45.141899 | orchestrator |
2026-03-01 00:38:45.141913 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:38:45.141926 | orchestrator | Sunday 01 March 2026 00:38:44 +0000 (0:00:01.584) 0:00:16.663 **********
2026-03-01 00:38:45.141939 | orchestrator | ===============================================================================
2026-03-01 00:38:45.141952 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.87s
2026-03-01 00:38:45.141966 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.58s
2026-03-01 00:38:45.141979 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.44s
2026-03-01 00:38:45.141993 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.43s
2026-03-01 00:38:45.142001 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.12s
2026-03-01 00:38:45.342550 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-01 00:38:45.342644 | orchestrator | + osism apply network
2026-03-01 00:38:57.150451 | orchestrator | 2026-03-01 00:38:57 | INFO  | Prepare task for execution of network.
2026-03-01 00:38:57.207476 | orchestrator | 2026-03-01 00:38:57 | INFO  | Task 52629fc7-71b1-47cc-a010-ee636a403e73 (network) was prepared for execution.
2026-03-01 00:38:57.207569 | orchestrator | 2026-03-01 00:38:57 | INFO  | It takes a moment until task 52629fc7-71b1-47cc-a010-ee636a403e73 (network) has been started and output is visible here.
2026-03-01 00:39:22.902942 | orchestrator |
2026-03-01 00:39:22.904525 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-01 00:39:22.904565 | orchestrator |
2026-03-01 00:39:22.904573 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-01 00:39:22.904589 | orchestrator | Sunday 01 March 2026 00:39:01 +0000 (0:00:00.187) 0:00:00.187 **********
2026-03-01 00:39:22.904597 | orchestrator | ok: [testbed-manager]
2026-03-01 00:39:22.904605 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:39:22.904611 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:39:22.904618 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:39:22.904624 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:39:22.904630 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:39:22.904637 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:39:22.904643 | orchestrator |
2026-03-01 00:39:22.904650 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-01 00:39:22.904656 | orchestrator | Sunday 01 March 2026 00:39:01 +0000 (0:00:00.522) 0:00:00.709 **********
2026-03-01 00:39:22.904665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 00:39:22.904681 | orchestrator |
2026-03-01 00:39:22.904745 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-01 00:39:22.904753 | orchestrator | Sunday 01 March 2026 00:39:02 +0000 (0:00:00.865) 0:00:01.575 **********
2026-03-01 00:39:22.904759 | orchestrator | ok: [testbed-manager]
2026-03-01 00:39:22.904765 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:39:22.904772 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:39:22.904778 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:39:22.904784 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:39:22.904832 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:39:22.904839 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:39:22.904846 | orchestrator |
2026-03-01 00:39:22.904852 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-01 00:39:22.904859 | orchestrator | Sunday 01 March 2026 00:39:04 +0000 (0:00:01.912) 0:00:03.488 **********
2026-03-01 00:39:22.904865 | orchestrator | ok: [testbed-manager]
2026-03-01 00:39:22.904871 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:39:22.904877 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:39:22.904883 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:39:22.904890 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:39:22.904896 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:39:22.904902 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:39:22.904908 | orchestrator |
2026-03-01 00:39:22.904914 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-01 00:39:22.904920 | orchestrator | Sunday 01 March 2026 00:39:06 +0000 (0:00:01.634) 0:00:05.122 **********
2026-03-01 00:39:22.904927 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-01 00:39:22.904934 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-01 00:39:22.904940 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-01 00:39:22.904946 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-01 00:39:22.904952 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-01 00:39:22.904959 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-01 00:39:22.904965 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-01 00:39:22.904971 | orchestrator |
2026-03-01 00:39:22.904977 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-01 00:39:22.904984 | orchestrator | Sunday 01 March 2026 00:39:07 +0000 (0:00:00.934) 0:00:06.057 **********
2026-03-01 00:39:22.904990 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-01 00:39:22.905006 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-01 00:39:22.905012 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-01 00:39:22.905019 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-01 00:39:22.905025 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-01 00:39:22.905031 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-01 00:39:22.905038 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-01 00:39:22.905044 | orchestrator |
2026-03-01 00:39:22.905051 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-01 00:39:22.905057 | orchestrator | Sunday 01 March 2026 00:39:10 +0000 (0:00:02.999) 0:00:09.057 **********
2026-03-01 00:39:22.905064 | orchestrator | changed: [testbed-manager]
2026-03-01 00:39:22.905070 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:39:22.905077 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:39:22.905083 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:39:22.905089 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:39:22.905096 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:39:22.905102 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:39:22.905108 | orchestrator |
2026-03-01 00:39:22.905114 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-01 00:39:22.905121 | orchestrator | Sunday 01 March 2026 00:39:11 +0000 (0:00:01.396) 0:00:10.453 **********
2026-03-01 00:39:22.905127 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-01 00:39:22.905133 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-01 00:39:22.905140 | orchestrator | ok: [testbed-node-0
-> localhost] 2026-03-01 00:39:22.905169 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-01 00:39:22.905177 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-01 00:39:22.905183 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-01 00:39:22.905189 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-01 00:39:22.905195 | orchestrator | 2026-03-01 00:39:22.905201 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-01 00:39:22.905208 | orchestrator | Sunday 01 March 2026 00:39:13 +0000 (0:00:01.610) 0:00:12.064 ********** 2026-03-01 00:39:22.905219 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:22.905225 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:22.905232 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:22.905238 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:22.905244 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:22.905250 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:22.905256 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:22.905262 | orchestrator | 2026-03-01 00:39:22.905269 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-01 00:39:22.905294 | orchestrator | Sunday 01 March 2026 00:39:13 +0000 (0:00:00.975) 0:00:13.039 ********** 2026-03-01 00:39:22.905300 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:22.905306 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:22.905311 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:22.905317 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:22.905322 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:22.905328 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:22.905333 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:22.905338 | orchestrator | 2026-03-01 00:39:22.905344 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-01 00:39:22.905349 | orchestrator | Sunday 01 March 2026 00:39:14 +0000 (0:00:00.586) 0:00:13.626 ********** 2026-03-01 00:39:22.905355 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:22.905360 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:22.905366 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:22.905371 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:22.905377 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:22.905382 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:22.905387 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:22.905393 | orchestrator | 2026-03-01 00:39:22.905398 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-01 00:39:22.905404 | orchestrator | Sunday 01 March 2026 00:39:16 +0000 (0:00:02.220) 0:00:15.847 ********** 2026-03-01 00:39:22.905409 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:22.905415 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:22.905420 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:22.905426 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:22.905431 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:22.905436 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:22.905443 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-01 00:39:22.906708 | orchestrator | 2026-03-01 00:39:22.906722 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-01 00:39:22.906730 | orchestrator | Sunday 01 March 2026 00:39:17 +0000 (0:00:00.805) 0:00:16.652 ********** 2026-03-01 00:39:22.906736 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:22.906743 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:39:22.906749 | orchestrator | changed: [testbed-node-0] 2026-03-01 
00:39:22.906755 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:39:22.906761 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:39:22.906766 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:39:22.906771 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:39:22.906777 | orchestrator | 2026-03-01 00:39:22.906782 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-01 00:39:22.906788 | orchestrator | Sunday 01 March 2026 00:39:19 +0000 (0:00:01.558) 0:00:18.211 ********** 2026-03-01 00:39:22.906795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:39:22.906802 | orchestrator | 2026-03-01 00:39:22.906808 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-01 00:39:22.906814 | orchestrator | Sunday 01 March 2026 00:39:20 +0000 (0:00:01.073) 0:00:19.284 ********** 2026-03-01 00:39:22.906829 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:22.906835 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:22.906840 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:22.906846 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:22.906851 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:22.906857 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:22.906862 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:22.906868 | orchestrator | 2026-03-01 00:39:22.906873 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-01 00:39:22.906879 | orchestrator | Sunday 01 March 2026 00:39:21 +0000 (0:00:00.997) 0:00:20.281 ********** 2026-03-01 00:39:22.906884 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:22.906890 | orchestrator | ok: [testbed-node-0] 2026-03-01 
00:39:22.906902 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:22.906907 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:22.906913 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:22.906918 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:22.906924 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:22.906929 | orchestrator | 2026-03-01 00:39:22.906934 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-01 00:39:22.906940 | orchestrator | Sunday 01 March 2026 00:39:21 +0000 (0:00:00.552) 0:00:20.834 ********** 2026-03-01 00:39:22.906946 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906951 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906957 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906962 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906968 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.906973 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906979 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.906984 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.906990 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.906995 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.907000 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-01 00:39:22.907006 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.907011 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.907017 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-01 00:39:22.907022 | orchestrator | 2026-03-01 00:39:22.907038 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-01 00:39:37.950796 | orchestrator | Sunday 01 March 2026 00:39:22 +0000 (0:00:01.110) 0:00:21.944 ********** 2026-03-01 00:39:37.950915 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:37.950934 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:37.950947 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:37.950958 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:37.950969 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:37.950980 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:37.950991 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:37.951002 | orchestrator | 2026-03-01 00:39:37.951015 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-01 00:39:37.951026 | orchestrator | Sunday 01 March 2026 00:39:23 +0000 (0:00:00.606) 0:00:22.550 ********** 2026-03-01 00:39:37.951039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-03-01 00:39:37.951078 | orchestrator | 2026-03-01 00:39:37.951091 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-01 00:39:37.951102 | orchestrator | Sunday 01 March 2026 00:39:27 +0000 (0:00:04.496) 0:00:27.047 ********** 2026-03-01 00:39:37.951114 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951140 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': 
'192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951386 | orchestrator | 2026-03-01 00:39:37.951400 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-01 00:39:37.951413 | orchestrator | Sunday 01 March 2026 00:39:33 +0000 (0:00:05.228) 0:00:32.275 ********** 2026-03-01 00:39:37.951426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951439 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951523 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-01 00:39:37.951549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:37.951607 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:50.088836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-01 00:39:50.088983 | orchestrator | 2026-03-01 00:39:50.089004 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-01 00:39:50.089017 | orchestrator | Sunday 01 March 2026 00:39:38 +0000 (0:00:05.001) 0:00:37.276 ********** 2026-03-01 00:39:50.089031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:39:50.089043 | orchestrator | 2026-03-01 00:39:50.089054 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-01 00:39:50.089065 | orchestrator | Sunday 01 March 2026 00:39:39 +0000 (0:00:01.085) 0:00:38.362 ********** 2026-03-01 00:39:50.089076 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:50.089089 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:50.089100 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:50.089111 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:50.089122 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:50.089133 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:50.089144 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:50.089155 | orchestrator | 2026-03-01 00:39:50.089166 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-01 00:39:50.089177 | orchestrator | Sunday 01 March 2026 00:39:40 +0000 (0:00:01.010) 0:00:39.372 ********** 2026-03-01 00:39:50.089188 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089256 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089268 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089280 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089291 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089302 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089316 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089330 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.089344 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089357 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089370 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089383 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089396 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.089409 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089423 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089453 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-01 00:39:50.089467 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089480 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089518 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.089532 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089545 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089558 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089571 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.089585 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089599 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089611 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089625 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089637 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089651 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.089706 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.089717 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-01 00:39:50.089729 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-01 00:39:50.089739 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-01 00:39:50.089750 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-01 00:39:50.089761 | 
orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:50.089772 | orchestrator | 2026-03-01 00:39:50.089784 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-01 00:39:50.089815 | orchestrator | Sunday 01 March 2026 00:39:41 +0000 (0:00:00.797) 0:00:40.169 ********** 2026-03-01 00:39:50.089828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:39:50.089839 | orchestrator | 2026-03-01 00:39:50.089850 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-01 00:39:50.089861 | orchestrator | Sunday 01 March 2026 00:39:42 +0000 (0:00:01.149) 0:00:41.318 ********** 2026-03-01 00:39:50.089872 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.089883 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.089894 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.089905 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.089916 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.089927 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.089937 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:50.089948 | orchestrator | 2026-03-01 00:39:50.089959 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-01 00:39:50.089970 | orchestrator | Sunday 01 March 2026 00:39:42 +0000 (0:00:00.595) 0:00:41.914 ********** 2026-03-01 00:39:50.089981 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.089992 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.090003 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.090014 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.090112 | 
orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.090131 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.090142 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:50.090153 | orchestrator | 2026-03-01 00:39:50.090165 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-01 00:39:50.090176 | orchestrator | Sunday 01 March 2026 00:39:43 +0000 (0:00:00.692) 0:00:42.607 ********** 2026-03-01 00:39:50.090186 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.090208 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.090219 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.090230 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.090240 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.090251 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.090262 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:50.090273 | orchestrator | 2026-03-01 00:39:50.090284 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-01 00:39:50.090294 | orchestrator | Sunday 01 March 2026 00:39:44 +0000 (0:00:00.528) 0:00:43.136 ********** 2026-03-01 00:39:50.090305 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:50.090316 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:50.090326 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:50.090337 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:50.090348 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:50.090359 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:50.090369 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:50.090380 | orchestrator | 2026-03-01 00:39:50.090391 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-01 00:39:50.090402 | orchestrator | Sunday 01 March 2026 00:39:45 +0000 (0:00:01.620) 0:00:44.756 ********** 
2026-03-01 00:39:50.090413 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:50.090424 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:50.090434 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:50.090445 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:50.090456 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:50.090466 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:50.090477 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:50.090487 | orchestrator | 2026-03-01 00:39:50.090498 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-01 00:39:50.090516 | orchestrator | Sunday 01 March 2026 00:39:46 +0000 (0:00:00.962) 0:00:45.719 ********** 2026-03-01 00:39:50.090528 | orchestrator | ok: [testbed-manager] 2026-03-01 00:39:50.090538 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:39:50.090549 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:39:50.090560 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:39:50.090570 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:39:50.090581 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:39:50.090591 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:39:50.090602 | orchestrator | 2026-03-01 00:39:50.090613 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-01 00:39:50.090624 | orchestrator | Sunday 01 March 2026 00:39:48 +0000 (0:00:02.111) 0:00:47.830 ********** 2026-03-01 00:39:50.090635 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.090646 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.090679 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.090691 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.090702 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.090714 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.090725 | orchestrator | skipping: [testbed-node-5] 2026-03-01 
00:39:50.090736 | orchestrator | 2026-03-01 00:39:50.090747 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-01 00:39:50.090758 | orchestrator | Sunday 01 March 2026 00:39:49 +0000 (0:00:00.762) 0:00:48.593 ********** 2026-03-01 00:39:50.090769 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:39:50.090780 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:39:50.090791 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:39:50.090802 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:39:50.090812 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:39:50.090823 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:39:50.090834 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:39:50.090845 | orchestrator | 2026-03-01 00:39:50.090856 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:39:50.090869 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-01 00:39:50.090889 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.090910 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.437925 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.438012 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.438069 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.438081 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 00:39:50.438093 | orchestrator | 2026-03-01 00:39:50.438107 | orchestrator | 2026-03-01 00:39:50.438119 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:39:50.438131 | orchestrator | Sunday 01 March 2026 00:39:50 +0000 (0:00:00.539) 0:00:49.133 ********** 2026-03-01 00:39:50.438138 | orchestrator | =============================================================================== 2026-03-01 00:39:50.438145 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.23s 2026-03-01 00:39:50.438152 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.00s 2026-03-01 00:39:50.438159 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.50s 2026-03-01 00:39:50.438165 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.00s 2026-03-01 00:39:50.438172 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.22s 2026-03-01 00:39:50.438179 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s 2026-03-01 00:39:50.438185 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.91s 2026-03-01 00:39:50.438192 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s 2026-03-01 00:39:50.438198 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.62s 2026-03-01 00:39:50.438205 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.61s 2026-03-01 00:39:50.438212 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.56s 2026-03-01 00:39:50.438218 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.40s 2026-03-01 00:39:50.438225 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.15s 2026-03-01 00:39:50.438231 | orchestrator | 
osism.commons.network : Remove unused configuration files --------------- 1.11s 2026-03-01 00:39:50.438238 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2026-03-01 00:39:50.438246 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.07s 2026-03-01 00:39:50.438260 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2026-03-01 00:39:50.438277 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2026-03-01 00:39:50.438288 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.98s 2026-03-01 00:39:50.438299 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 0.96s 2026-03-01 00:39:50.752499 | orchestrator | + osism apply wireguard 2026-03-01 00:40:02.737151 | orchestrator | 2026-03-01 00:40:02 | INFO  | Prepare task for execution of wireguard. 2026-03-01 00:40:02.800267 | orchestrator | 2026-03-01 00:40:02 | INFO  | Task a45ded6c-7cd7-497c-be18-7f84ec5c4bd7 (wireguard) was prepared for execution. 2026-03-01 00:40:02.800368 | orchestrator | 2026-03-01 00:40:02 | INFO  | It takes a moment until task a45ded6c-7cd7-497c-be18-7f84ec5c4bd7 (wireguard) has been started and output is visible here. 
2026-03-01 00:40:20.924742 | orchestrator | 2026-03-01 00:40:20.925236 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-01 00:40:20.925289 | orchestrator | 2026-03-01 00:40:20.925304 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-01 00:40:20.925317 | orchestrator | Sunday 01 March 2026 00:40:06 +0000 (0:00:00.189) 0:00:00.189 ********** 2026-03-01 00:40:20.925329 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:20.925353 | orchestrator | 2026-03-01 00:40:20.925365 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-01 00:40:20.925377 | orchestrator | Sunday 01 March 2026 00:40:07 +0000 (0:00:01.157) 0:00:01.347 ********** 2026-03-01 00:40:20.925388 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.925401 | orchestrator | 2026-03-01 00:40:20.925415 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-01 00:40:20.925429 | orchestrator | Sunday 01 March 2026 00:40:13 +0000 (0:00:05.857) 0:00:07.204 ********** 2026-03-01 00:40:20.925441 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.925460 | orchestrator | 2026-03-01 00:40:20.925480 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-01 00:40:20.925499 | orchestrator | Sunday 01 March 2026 00:40:14 +0000 (0:00:00.550) 0:00:07.755 ********** 2026-03-01 00:40:20.925517 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.925538 | orchestrator | 2026-03-01 00:40:20.925557 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-01 00:40:20.925576 | orchestrator | Sunday 01 March 2026 00:40:14 +0000 (0:00:00.439) 0:00:08.195 ********** 2026-03-01 00:40:20.925597 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:20.925618 | orchestrator | 2026-03-01 
00:40:20.925704 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-01 00:40:20.925725 | orchestrator | Sunday 01 March 2026 00:40:15 +0000 (0:00:00.648) 0:00:08.844 ********** 2026-03-01 00:40:20.925745 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:20.925763 | orchestrator | 2026-03-01 00:40:20.925781 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-01 00:40:20.925801 | orchestrator | Sunday 01 March 2026 00:40:15 +0000 (0:00:00.401) 0:00:09.245 ********** 2026-03-01 00:40:20.925820 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:20.925839 | orchestrator | 2026-03-01 00:40:20.925857 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-01 00:40:20.925876 | orchestrator | Sunday 01 March 2026 00:40:16 +0000 (0:00:00.417) 0:00:09.663 ********** 2026-03-01 00:40:20.925896 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.925916 | orchestrator | 2026-03-01 00:40:20.925935 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-01 00:40:20.925953 | orchestrator | Sunday 01 March 2026 00:40:17 +0000 (0:00:01.131) 0:00:10.795 ********** 2026-03-01 00:40:20.925973 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-01 00:40:20.925987 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.925998 | orchestrator | 2026-03-01 00:40:20.926012 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-01 00:40:20.926097 | orchestrator | Sunday 01 March 2026 00:40:18 +0000 (0:00:00.936) 0:00:11.732 ********** 2026-03-01 00:40:20.926117 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.926134 | orchestrator | 2026-03-01 00:40:20.926152 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-01 
00:40:20.926172 | orchestrator | Sunday 01 March 2026 00:40:19 +0000 (0:00:01.621) 0:00:13.354 ********** 2026-03-01 00:40:20.926191 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:20.926210 | orchestrator | 2026-03-01 00:40:20.926229 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:40:20.926303 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:40:20.926319 | orchestrator | 2026-03-01 00:40:20.926330 | orchestrator | 2026-03-01 00:40:20.926341 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:40:20.926352 | orchestrator | Sunday 01 March 2026 00:40:20 +0000 (0:00:00.842) 0:00:14.197 ********** 2026-03-01 00:40:20.926364 | orchestrator | =============================================================================== 2026-03-01 00:40:20.926375 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.86s 2026-03-01 00:40:20.926386 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.62s 2026-03-01 00:40:20.926397 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.16s 2026-03-01 00:40:20.926408 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s 2026-03-01 00:40:20.926419 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2026-03-01 00:40:20.926430 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s 2026-03-01 00:40:20.926441 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.65s 2026-03-01 00:40:20.926452 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-03-01 00:40:20.926463 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.44s 2026-03-01 00:40:20.926479 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-03-01 00:40:20.926491 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2026-03-01 00:40:21.134251 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-01 00:40:21.171789 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-01 00:40:21.171866 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-01 00:40:21.249148 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 180 0 --:--:-- --:--:-- --:--:-- 181 2026-03-01 00:40:21.260251 | orchestrator | + osism apply --environment custom workarounds 2026-03-01 00:40:23.030273 | orchestrator | 2026-03-01 00:40:23 | INFO  | Trying to run play workarounds in environment custom 2026-03-01 00:40:33.114763 | orchestrator | 2026-03-01 00:40:33 | INFO  | Prepare task for execution of workarounds. 2026-03-01 00:40:33.182305 | orchestrator | 2026-03-01 00:40:33 | INFO  | Task 0d324485-2efb-413a-ad8a-f94b5c4b94b8 (workarounds) was prepared for execution. 2026-03-01 00:40:33.182398 | orchestrator | 2026-03-01 00:40:33 | INFO  | It takes a moment until task 0d324485-2efb-413a-ad8a-f94b5c4b94b8 (workarounds) has been started and output is visible here. 
2026-03-01 00:40:57.021375 | orchestrator | 2026-03-01 00:40:57.181402 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 00:40:57.181503 | orchestrator | 2026-03-01 00:40:57.181520 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-01 00:40:57.181533 | orchestrator | Sunday 01 March 2026 00:40:36 +0000 (0:00:00.132) 0:00:00.132 ********** 2026-03-01 00:40:57.181544 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181556 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181567 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181577 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181637 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181651 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181663 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-01 00:40:57.181754 | orchestrator | 2026-03-01 00:40:57.181768 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-01 00:40:57.181779 | orchestrator | 2026-03-01 00:40:57.181790 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-01 00:40:57.181801 | orchestrator | Sunday 01 March 2026 00:40:37 +0000 (0:00:00.687) 0:00:00.820 ********** 2026-03-01 00:40:57.181813 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:57.181825 | orchestrator | 2026-03-01 00:40:57.181836 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-01 00:40:57.181847 | orchestrator | 2026-03-01 00:40:57.181858 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-01 00:40:57.181870 | orchestrator | Sunday 01 March 2026 00:40:39 +0000 (0:00:02.132) 0:00:02.952 ********** 2026-03-01 00:40:57.181881 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:40:57.181892 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:40:57.181902 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:40:57.181913 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:40:57.181924 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:40:57.181935 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:40:57.181945 | orchestrator | 2026-03-01 00:40:57.181957 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-01 00:40:57.181968 | orchestrator | 2026-03-01 00:40:57.181978 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-01 00:40:57.181989 | orchestrator | Sunday 01 March 2026 00:40:41 +0000 (0:00:01.815) 0:00:04.767 ********** 2026-03-01 00:40:57.182001 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182051 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182066 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182082 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182101 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182119 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-01 00:40:57.182137 | orchestrator | 2026-03-01 00:40:57.182154 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-01 00:40:57.182289 | orchestrator | Sunday 01 March 2026 00:40:43 +0000 (0:00:01.393) 0:00:06.161 ********** 2026-03-01 00:40:57.182320 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:40:57.182337 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:40:57.182348 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:40:57.182359 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:40:57.182370 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:40:57.182381 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:40:57.182391 | orchestrator | 2026-03-01 00:40:57.182403 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-01 00:40:57.182414 | orchestrator | Sunday 01 March 2026 00:40:46 +0000 (0:00:03.813) 0:00:09.975 ********** 2026-03-01 00:40:57.182425 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:40:57.182435 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:40:57.182477 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:40:57.182496 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:40:57.182512 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:40:57.182531 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:40:57.182550 | orchestrator | 2026-03-01 00:40:57.182626 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-01 00:40:57.182646 | orchestrator | 2026-03-01 00:40:57.182662 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-01 00:40:57.182682 | orchestrator | Sunday 01 March 2026 00:40:47 +0000 (0:00:00.670) 0:00:10.646 ********** 2026-03-01 00:40:57.182721 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:40:57.182738 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:40:57.182750 | orchestrator | changed: [testbed-node-2] 2026-03-01 
00:40:57.182760 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:40:57.182771 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:40:57.182782 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:40:57.182793 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:57.182803 | orchestrator | 2026-03-01 00:40:57.182815 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-01 00:40:57.182826 | orchestrator | Sunday 01 March 2026 00:40:49 +0000 (0:00:01.544) 0:00:12.191 ********** 2026-03-01 00:40:57.182837 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:40:57.182847 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:40:57.182858 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:40:57.182869 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:40:57.182880 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:40:57.182890 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:40:57.182930 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:57.182942 | orchestrator | 2026-03-01 00:40:57.182953 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-01 00:40:57.182964 | orchestrator | Sunday 01 March 2026 00:40:50 +0000 (0:00:01.426) 0:00:13.617 ********** 2026-03-01 00:40:57.182975 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:40:57.182986 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:40:57.182997 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:40:57.183008 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:40:57.183019 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:40:57.183029 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:40:57.183040 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:57.183051 | orchestrator | 2026-03-01 00:40:57.183062 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-01 00:40:57.183073 | orchestrator 
| Sunday 01 March 2026 00:40:51 +0000 (0:00:01.506) 0:00:15.123 ********** 2026-03-01 00:40:57.183084 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:40:57.183095 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:40:57.183106 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:40:57.183117 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:40:57.183128 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:40:57.183138 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:40:57.183149 | orchestrator | changed: [testbed-manager] 2026-03-01 00:40:57.183160 | orchestrator | 2026-03-01 00:40:57.183171 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-01 00:40:57.183182 | orchestrator | Sunday 01 March 2026 00:40:53 +0000 (0:00:01.742) 0:00:16.866 ********** 2026-03-01 00:40:57.183193 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:40:57.183203 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:40:57.183214 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:40:57.183225 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:40:57.183235 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:40:57.183246 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:40:57.183257 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:40:57.183267 | orchestrator | 2026-03-01 00:40:57.183278 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-01 00:40:57.183289 | orchestrator | 2026-03-01 00:40:57.183300 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-01 00:40:57.183311 | orchestrator | Sunday 01 March 2026 00:40:54 +0000 (0:00:00.568) 0:00:17.435 ********** 2026-03-01 00:40:57.183322 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:40:57.183332 | orchestrator | ok: [testbed-manager] 2026-03-01 00:40:57.183343 | orchestrator | ok: 
[testbed-node-0] 2026-03-01 00:40:57.183354 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:40:57.183365 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:40:57.183376 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:40:57.183395 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:40:57.183405 | orchestrator | 2026-03-01 00:40:57.183417 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:40:57.183428 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:40:57.183441 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183452 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183463 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183474 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183485 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183496 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:40:57.183507 | orchestrator | 2026-03-01 00:40:57.183518 | orchestrator | 2026-03-01 00:40:57.183529 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:40:57.183547 | orchestrator | Sunday 01 March 2026 00:40:57 +0000 (0:00:02.704) 0:00:20.139 ********** 2026-03-01 00:40:57.183558 | orchestrator | =============================================================================== 2026-03-01 00:40:57.183569 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.81s 2026-03-01 00:40:57.183580 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.70s 2026-03-01 00:40:57.183611 | orchestrator | Apply netplan configuration --------------------------------------------- 2.13s 2026-03-01 00:40:57.183622 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2026-03-01 00:40:57.183633 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s 2026-03-01 00:40:57.183644 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.54s 2026-03-01 00:40:57.183655 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s 2026-03-01 00:40:57.183665 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.43s 2026-03-01 00:40:57.183676 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.39s 2026-03-01 00:40:57.183687 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s 2026-03-01 00:40:57.183698 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2026-03-01 00:40:57.183717 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.57s 2026-03-01 00:40:57.435039 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-01 00:41:09.312110 | orchestrator | 2026-03-01 00:41:09 | INFO  | Prepare task for execution of reboot. 2026-03-01 00:41:09.376803 | orchestrator | 2026-03-01 00:41:09 | INFO  | Task a53131e6-389b-4f64-a1f1-4a013479eeca (reboot) was prepared for execution. 2026-03-01 00:41:09.376914 | orchestrator | 2026-03-01 00:41:09 | INFO  | It takes a moment until task a53131e6-389b-4f64-a1f1-4a013479eeca (reboot) has been started and output is visible here. 
2026-03-01 00:41:18.617204 | orchestrator | 2026-03-01 00:41:18.617349 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.617370 | orchestrator | 2026-03-01 00:41:18.617383 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.617422 | orchestrator | Sunday 01 March 2026 00:41:13 +0000 (0:00:00.149) 0:00:00.149 ********** 2026-03-01 00:41:18.617435 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:41:18.617447 | orchestrator | 2026-03-01 00:41:18.617458 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.617470 | orchestrator | Sunday 01 March 2026 00:41:13 +0000 (0:00:00.081) 0:00:00.230 ********** 2026-03-01 00:41:18.617481 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:41:18.617492 | orchestrator | 2026-03-01 00:41:18.617503 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-01 00:41:18.617514 | orchestrator | Sunday 01 March 2026 00:41:14 +0000 (0:00:00.914) 0:00:01.144 ********** 2026-03-01 00:41:18.617525 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:41:18.617536 | orchestrator | 2026-03-01 00:41:18.617547 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.617558 | orchestrator | 2026-03-01 00:41:18.617621 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.617633 | orchestrator | Sunday 01 March 2026 00:41:14 +0000 (0:00:00.102) 0:00:01.247 ********** 2026-03-01 00:41:18.617645 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:41:18.617664 | orchestrator | 2026-03-01 00:41:18.617683 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.617700 | orchestrator | Sunday 01 March 2026 
00:41:14 +0000 (0:00:00.091) 0:00:01.338 ********** 2026-03-01 00:41:18.617718 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:41:18.617735 | orchestrator | 2026-03-01 00:41:18.617751 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-01 00:41:18.617767 | orchestrator | Sunday 01 March 2026 00:41:14 +0000 (0:00:00.606) 0:00:01.945 ********** 2026-03-01 00:41:18.617785 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:41:18.617802 | orchestrator | 2026-03-01 00:41:18.617820 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.617838 | orchestrator | 2026-03-01 00:41:18.617858 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.617877 | orchestrator | Sunday 01 March 2026 00:41:15 +0000 (0:00:00.099) 0:00:02.044 ********** 2026-03-01 00:41:18.617895 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:41:18.617914 | orchestrator | 2026-03-01 00:41:18.617926 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.617937 | orchestrator | Sunday 01 March 2026 00:41:15 +0000 (0:00:00.168) 0:00:02.212 ********** 2026-03-01 00:41:18.617948 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:41:18.617959 | orchestrator | 2026-03-01 00:41:18.617970 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-01 00:41:18.617981 | orchestrator | Sunday 01 March 2026 00:41:15 +0000 (0:00:00.674) 0:00:02.886 ********** 2026-03-01 00:41:18.617992 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:41:18.618003 | orchestrator | 2026-03-01 00:41:18.618014 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.618083 | orchestrator | 2026-03-01 00:41:18.618095 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.618106 | orchestrator | Sunday 01 March 2026 00:41:15 +0000 (0:00:00.095) 0:00:02.982 ********** 2026-03-01 00:41:18.618117 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:41:18.618128 | orchestrator | 2026-03-01 00:41:18.618175 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.618187 | orchestrator | Sunday 01 March 2026 00:41:16 +0000 (0:00:00.088) 0:00:03.071 ********** 2026-03-01 00:41:18.618213 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:41:18.618225 | orchestrator | 2026-03-01 00:41:18.618236 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-01 00:41:18.618247 | orchestrator | Sunday 01 March 2026 00:41:16 +0000 (0:00:00.643) 0:00:03.714 ********** 2026-03-01 00:41:18.618258 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:41:18.618280 | orchestrator | 2026-03-01 00:41:18.618291 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.618320 | orchestrator | 2026-03-01 00:41:18.618333 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.618355 | orchestrator | Sunday 01 March 2026 00:41:16 +0000 (0:00:00.107) 0:00:03.822 ********** 2026-03-01 00:41:18.618367 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:41:18.618378 | orchestrator | 2026-03-01 00:41:18.618389 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.618400 | orchestrator | Sunday 01 March 2026 00:41:16 +0000 (0:00:00.087) 0:00:03.909 ********** 2026-03-01 00:41:18.618411 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:41:18.618422 | orchestrator | 2026-03-01 00:41:18.618433 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-01 00:41:18.618444 | orchestrator | Sunday 01 March 2026 00:41:17 +0000 (0:00:00.644) 0:00:04.554 ********** 2026-03-01 00:41:18.618454 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:41:18.618465 | orchestrator | 2026-03-01 00:41:18.618476 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-01 00:41:18.618487 | orchestrator | 2026-03-01 00:41:18.618498 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-01 00:41:18.618509 | orchestrator | Sunday 01 March 2026 00:41:17 +0000 (0:00:00.115) 0:00:04.670 ********** 2026-03-01 00:41:18.618520 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:41:18.618531 | orchestrator | 2026-03-01 00:41:18.618542 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-01 00:41:18.618553 | orchestrator | Sunday 01 March 2026 00:41:17 +0000 (0:00:00.097) 0:00:04.768 ********** 2026-03-01 00:41:18.618587 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:41:18.618607 | orchestrator | 2026-03-01 00:41:18.618627 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-01 00:41:18.618646 | orchestrator | Sunday 01 March 2026 00:41:18 +0000 (0:00:00.642) 0:00:05.410 ********** 2026-03-01 00:41:18.618690 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:41:18.618703 | orchestrator | 2026-03-01 00:41:18.618714 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:41:18.618726 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:41:18.618739 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:41:18.618750 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-01 00:41:18.618761 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:41:18.618772 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:41:18.618783 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:41:18.618793 | orchestrator | 2026-03-01 00:41:18.618804 | orchestrator | 2026-03-01 00:41:18.618815 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:41:18.618826 | orchestrator | Sunday 01 March 2026 00:41:18 +0000 (0:00:00.030) 0:00:05.440 ********** 2026-03-01 00:41:18.618837 | orchestrator | =============================================================================== 2026-03-01 00:41:18.618848 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.13s 2026-03-01 00:41:18.618859 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2026-03-01 00:41:18.618870 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-03-01 00:41:18.828839 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-01 00:41:30.670167 | orchestrator | 2026-03-01 00:41:30 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-01 00:41:30.750715 | orchestrator | 2026-03-01 00:41:30 | INFO  | Task c9e1f5fc-f2c5-487d-91cd-d868836288f0 (wait-for-connection) was prepared for execution. 2026-03-01 00:41:30.750787 | orchestrator | 2026-03-01 00:41:30 | INFO  | It takes a moment until task c9e1f5fc-f2c5-487d-91cd-d868836288f0 (wait-for-connection) has been started and output is visible here. 
2026-03-01 00:41:46.299657 | orchestrator | 2026-03-01 00:41:46.299783 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-01 00:41:46.299803 | orchestrator | 2026-03-01 00:41:46.299816 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-01 00:41:46.299827 | orchestrator | Sunday 01 March 2026 00:41:34 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-03-01 00:41:46.299839 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:41:46.299851 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:41:46.299862 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:41:46.299873 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:41:46.299884 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:41:46.299895 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:41:46.299906 | orchestrator | 2026-03-01 00:41:46.299935 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:41:46.299947 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.299960 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.299972 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.299983 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.299994 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.300005 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:41:46.300016 | orchestrator | 2026-03-01 00:41:46.300027 | orchestrator | 2026-03-01 00:41:46.300038 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-01 00:41:46.300049 | orchestrator | Sunday 01 March 2026 00:41:46 +0000 (0:00:11.474) 0:00:11.692 ********** 2026-03-01 00:41:46.300060 | orchestrator | =============================================================================== 2026-03-01 00:41:46.300071 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.47s 2026-03-01 00:41:46.504082 | orchestrator | + osism apply hddtemp 2026-03-01 00:41:58.388004 | orchestrator | 2026-03-01 00:41:58 | INFO  | Prepare task for execution of hddtemp. 2026-03-01 00:41:58.446292 | orchestrator | 2026-03-01 00:41:58 | INFO  | Task 6e6d6a64-f464-457d-86e2-736d6e420ed1 (hddtemp) was prepared for execution. 2026-03-01 00:41:58.446381 | orchestrator | 2026-03-01 00:41:58 | INFO  | It takes a moment until task 6e6d6a64-f464-457d-86e2-736d6e420ed1 (hddtemp) has been started and output is visible here. 2026-03-01 00:42:26.006263 | orchestrator | 2026-03-01 00:42:26.006395 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-01 00:42:26.006423 | orchestrator | 2026-03-01 00:42:26.006442 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-01 00:42:26.006459 | orchestrator | Sunday 01 March 2026 00:42:02 +0000 (0:00:00.251) 0:00:00.251 ********** 2026-03-01 00:42:26.006476 | orchestrator | ok: [testbed-manager] 2026-03-01 00:42:26.006572 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:42:26.006585 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:42:26.006595 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:42:26.006604 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:42:26.006615 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:42:26.006625 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:42:26.006640 | orchestrator | 2026-03-01 00:42:26.006656 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-01 00:42:26.006672 | orchestrator | Sunday 01 March 2026 00:42:03 +0000 (0:00:00.607) 0:00:00.858 ********** 2026-03-01 00:42:26.006691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:42:26.006711 | orchestrator | 2026-03-01 00:42:26.006728 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-01 00:42:26.006739 | orchestrator | Sunday 01 March 2026 00:42:04 +0000 (0:00:01.044) 0:00:01.902 ********** 2026-03-01 00:42:26.006749 | orchestrator | ok: [testbed-manager] 2026-03-01 00:42:26.006758 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:42:26.006768 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:42:26.006779 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:42:26.006790 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:42:26.006803 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:42:26.006820 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:42:26.006835 | orchestrator | 2026-03-01 00:42:26.006852 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-01 00:42:26.006870 | orchestrator | Sunday 01 March 2026 00:42:05 +0000 (0:00:01.891) 0:00:03.794 ********** 2026-03-01 00:42:26.006888 | orchestrator | changed: [testbed-manager] 2026-03-01 00:42:26.006905 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:42:26.006923 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:42:26.006940 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:42:26.006956 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:42:26.006971 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:42:26.006987 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:42:26.007003 | 
orchestrator | 2026-03-01 00:42:26.007018 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-01 00:42:26.007034 | orchestrator | Sunday 01 March 2026 00:42:07 +0000 (0:00:01.049) 0:00:04.844 ********** 2026-03-01 00:42:26.007051 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:42:26.007066 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:42:26.007083 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:42:26.007099 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:42:26.007115 | orchestrator | ok: [testbed-manager] 2026-03-01 00:42:26.007131 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:42:26.007148 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:42:26.007164 | orchestrator | 2026-03-01 00:42:26.007179 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-01 00:42:26.007196 | orchestrator | Sunday 01 March 2026 00:42:08 +0000 (0:00:01.085) 0:00:05.929 ********** 2026-03-01 00:42:26.007212 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:42:26.007228 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:42:26.007244 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:42:26.007259 | orchestrator | changed: [testbed-manager] 2026-03-01 00:42:26.007288 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:42:26.007302 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:42:26.007317 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:42:26.007332 | orchestrator | 2026-03-01 00:42:26.007347 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-01 00:42:26.007362 | orchestrator | Sunday 01 March 2026 00:42:08 +0000 (0:00:00.765) 0:00:06.694 ********** 2026-03-01 00:42:26.007377 | orchestrator | changed: [testbed-manager] 2026-03-01 00:42:26.007394 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:42:26.007424 | orchestrator | changed: [testbed-node-0] 
2026-03-01 00:42:26.007441 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:42:26.007458 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:42:26.007474 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:42:26.007490 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:42:26.007536 | orchestrator | 2026-03-01 00:42:26.007553 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-01 00:42:26.007569 | orchestrator | Sunday 01 March 2026 00:42:22 +0000 (0:00:13.809) 0:00:20.504 ********** 2026-03-01 00:42:26.007587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:42:26.007603 | orchestrator | 2026-03-01 00:42:26.007620 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-01 00:42:26.007637 | orchestrator | Sunday 01 March 2026 00:42:23 +0000 (0:00:01.077) 0:00:21.581 ********** 2026-03-01 00:42:26.007653 | orchestrator | changed: [testbed-manager] 2026-03-01 00:42:26.007670 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:42:26.007686 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:42:26.007701 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:42:26.007717 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:42:26.007734 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:42:26.007750 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:42:26.007767 | orchestrator | 2026-03-01 00:42:26.007783 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:42:26.007801 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:42:26.007845 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007864 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007882 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007898 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007914 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007930 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:42:26.007947 | orchestrator | 2026-03-01 00:42:26.007963 | orchestrator | 2026-03-01 00:42:26.007980 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:42:26.007996 | orchestrator | Sunday 01 March 2026 00:42:25 +0000 (0:00:01.943) 0:00:23.525 ********** 2026-03-01 00:42:26.008011 | orchestrator | =============================================================================== 2026-03-01 00:42:26.008028 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.81s 2026-03-01 00:42:26.008045 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s 2026-03-01 00:42:26.008062 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.89s 2026-03-01 00:42:26.008078 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s 2026-03-01 00:42:26.008094 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.08s 2026-03-01 00:42:26.008110 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.05s 2026-03-01 00:42:26.008137 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 1.04s 2026-03-01 00:42:26.008152 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.77s 2026-03-01 00:42:26.008169 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s 2026-03-01 00:42:26.311421 | orchestrator | ++ semver latest 7.1.1 2026-03-01 00:42:26.367358 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-01 00:42:26.367467 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-01 00:42:26.367489 | orchestrator | + sudo systemctl restart manager.service 2026-03-01 00:42:43.077292 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-01 00:42:43.077394 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-01 00:42:43.077409 | orchestrator | + local max_attempts=60 2026-03-01 00:42:43.077421 | orchestrator | + local name=ceph-ansible 2026-03-01 00:42:43.077432 | orchestrator | + local attempt_num=1 2026-03-01 00:42:43.077442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:42:43.106681 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:42:43.106804 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:42:43.106829 | orchestrator | + sleep 5 2026-03-01 00:42:48.109869 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:42:48.138590 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:42:48.138681 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:42:48.138714 | orchestrator | + sleep 5 2026-03-01 00:42:53.141980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:42:53.180641 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:42:53.180747 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:42:53.180767 | orchestrator | + sleep 5 2026-03-01 00:42:58.184874 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:42:58.220864 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:42:58.220976 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:42:58.220994 | orchestrator | + sleep 5 2026-03-01 00:43:03.224916 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:03.260390 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:03.260511 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:03.260529 | orchestrator | + sleep 5 2026-03-01 00:43:08.265287 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:08.310286 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:08.310412 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:08.310438 | orchestrator | + sleep 5 2026-03-01 00:43:13.314976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:13.347383 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:13.347483 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:13.347499 | orchestrator | + sleep 5 2026-03-01 00:43:18.351326 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:18.388092 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:18.388154 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:18.388159 | orchestrator | + sleep 5 2026-03-01 00:43:23.390617 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:23.421473 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:23.421565 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:23.421576 | orchestrator | + sleep 5 2026-03-01 00:43:28.424812 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:28.460990 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:28.461090 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:28.461106 | orchestrator | + sleep 5 2026-03-01 00:43:33.465106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:33.499164 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:33.499278 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:33.499301 | orchestrator | + sleep 5 2026-03-01 00:43:38.503299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:38.541876 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:38.541983 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:38.542092 | orchestrator | + sleep 5 2026-03-01 00:43:43.545611 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:43.583171 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:43.583268 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-01 00:43:43.583285 | orchestrator | + sleep 5 2026-03-01 00:43:48.587463 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-01 00:43:48.622567 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:48.622683 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-01 00:43:48.622702 | orchestrator | + local max_attempts=60 2026-03-01 00:43:48.622716 | orchestrator | + local name=kolla-ansible 2026-03-01 00:43:48.622728 | orchestrator | + local attempt_num=1 2026-03-01 00:43:48.623493 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-01 00:43:48.653269 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:48.653366 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-01 00:43:48.653377 | orchestrator | + local max_attempts=60 2026-03-01 00:43:48.653385 | orchestrator | + local name=osism-ansible 2026-03-01 00:43:48.653391 | orchestrator | + local attempt_num=1 2026-03-01 00:43:48.654106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-01 00:43:48.681699 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-01 00:43:48.681832 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-01 00:43:48.681850 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-01 00:43:48.828046 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-01 00:43:49.000623 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-01 00:43:49.139893 | orchestrator | ARA in osism-ansible already disabled. 2026-03-01 00:43:49.274837 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-01 00:43:49.275071 | orchestrator | + osism apply gather-facts 2026-03-01 00:44:01.155391 | orchestrator | 2026-03-01 00:44:01 | INFO  | Prepare task for execution of gather-facts. 2026-03-01 00:44:01.239473 | orchestrator | 2026-03-01 00:44:01 | INFO  | Task de957725-05dd-43cf-adc1-e3891049d80f (gather-facts) was prepared for execution. 2026-03-01 00:44:01.240822 | orchestrator | 2026-03-01 00:44:01 | INFO  | It takes a moment until task de957725-05dd-43cf-adc1-e3891049d80f (gather-facts) has been started and output is visible here. 
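The `+` trace lines above repeatedly expand `docker inspect` / `sleep 5` from a polling helper named `wait_for_container_healthy`. Reconstructed from the trace (the function name, locals, and `(( attempt_num++ == max_attempts ))` test are taken from the log; the failure branch and the `DOCKER` override are assumptions for illustration), it looks roughly like:

```shell
# Poll a container's Docker health status until it reports "healthy",
# sleeping 5s between polls and giving up after max_attempts tries.
# DOCKER defaults to the binary path seen in the trace; overridable for testing.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$("${DOCKER:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Trace shows this exact post-increment-and-compare guard.
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second interval, as in the log, the helper allows roughly five minutes for a container to pass from `starting` (or `unhealthy`, immediately after the `manager.service` restart) to `healthy`.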
2026-03-01 00:44:14.166261 | orchestrator | 2026-03-01 00:44:14.166359 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-01 00:44:14.166372 | orchestrator | 2026-03-01 00:44:14.166381 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-01 00:44:14.166389 | orchestrator | Sunday 01 March 2026 00:44:05 +0000 (0:00:00.215) 0:00:00.215 ********** 2026-03-01 00:44:14.166398 | orchestrator | ok: [testbed-manager] 2026-03-01 00:44:14.166407 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:44:14.166416 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:44:14.166424 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:44:14.166432 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:44:14.166440 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:44:14.166448 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:44:14.166456 | orchestrator | 2026-03-01 00:44:14.166464 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-01 00:44:14.166472 | orchestrator | 2026-03-01 00:44:14.166480 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-01 00:44:14.166489 | orchestrator | Sunday 01 March 2026 00:44:13 +0000 (0:00:08.024) 0:00:08.240 ********** 2026-03-01 00:44:14.166497 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:44:14.166506 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:44:14.166514 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:44:14.166522 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:44:14.166530 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:44:14.166537 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:44:14.166545 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:44:14.166553 | orchestrator | 2026-03-01 00:44:14.166562 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-01 00:44:14.166570 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166603 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166611 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166633 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166642 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166650 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166659 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 00:44:14.166667 | orchestrator | 2026-03-01 00:44:14.166675 | orchestrator | 2026-03-01 00:44:14.166683 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:44:14.166691 | orchestrator | Sunday 01 March 2026 00:44:13 +0000 (0:00:00.453) 0:00:08.693 ********** 2026-03-01 00:44:14.166699 | orchestrator | =============================================================================== 2026-03-01 00:44:14.166707 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.02s 2026-03-01 00:44:14.166715 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-01 00:44:14.389781 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-01 00:44:14.403797 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-01 
00:44:14.412102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-01 00:44:14.430309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-01 00:44:14.449122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-01 00:44:14.459191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-01 00:44:14.469218 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-01 00:44:14.487315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-01 00:44:14.498457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-01 00:44:14.508714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-01 00:44:14.519149 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-01 00:44:14.528465 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-01 00:44:14.538995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-01 00:44:14.555250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-01 00:44:14.575117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-01 00:44:14.588726 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-01 00:44:14.604804 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-01 00:44:14.615688 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-01 00:44:14.626853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-01 00:44:14.642673 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-01 00:44:14.656291 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-01 00:44:14.668769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-01 00:44:14.684860 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-01 00:44:14.704706 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-01 00:44:15.162430 | orchestrator | ok: Runtime: 0:23:52.882401
2026-03-01 00:44:15.262783 |
2026-03-01 00:44:15.262971 | TASK [Deploy services]
2026-03-01 00:44:15.798239 | orchestrator | skipping: Conditional result was False
2026-03-01 00:44:15.816072 |
2026-03-01 00:44:15.816243 | TASK [Deploy in a nutshell]
2026-03-01 00:44:16.544637 | orchestrator |
2026-03-01 00:44:16.544878 | orchestrator | # PULL IMAGES
2026-03-01 00:44:16.544917 | orchestrator |
2026-03-01 00:44:16.544940 | orchestrator | + set -e
2026-03-01 00:44:16.544959 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-01 00:44:16.544980 | orchestrator | ++ export INTERACTIVE=false
2026-03-01 00:44:16.544993 | orchestrator | ++ INTERACTIVE=false
2026-03-01 00:44:16.545036 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-01 00:44:16.545058 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-01 00:44:16.545073 | orchestrator | + source /opt/manager-vars.sh
2026-03-01 00:44:16.545084 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-01 00:44:16.545103 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-01 00:44:16.545114 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-01 00:44:16.545133 | orchestrator | ++ CEPH_VERSION=reef
2026-03-01 00:44:16.545144 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-01 00:44:16.545163 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-01 00:44:16.545174 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-01 00:44:16.545188 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-01 00:44:16.545200 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-01 00:44:16.545212 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-01 00:44:16.545223 | orchestrator | ++ export ARA=false
2026-03-01 00:44:16.545235 | orchestrator | ++ ARA=false
2026-03-01 00:44:16.545245 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-01 00:44:16.545257 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-01 00:44:16.545267 | orchestrator | ++ export TEMPEST=true
2026-03-01 00:44:16.545278 | orchestrator | ++ TEMPEST=true
2026-03-01 00:44:16.545288 | orchestrator | ++ export IS_ZUUL=true
2026-03-01 00:44:16.545299 | orchestrator | ++ IS_ZUUL=true
2026-03-01 00:44:16.545310 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:44:16.545321 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.81
2026-03-01 00:44:16.545331 | orchestrator | ++ export EXTERNAL_API=false
2026-03-01 00:44:16.545342 | orchestrator | ++ EXTERNAL_API=false
2026-03-01 00:44:16.545353 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-01 00:44:16.545364 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-01 00:44:16.545375 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-01 00:44:16.545385 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-01 00:44:16.545397 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-01 00:44:16.545408 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-01 00:44:16.545419 | orchestrator | + echo
2026-03-01 00:44:16.545429 | orchestrator | + echo '# PULL IMAGES'
2026-03-01 00:44:16.545440 | orchestrator | + echo
2026-03-01 00:44:16.545465 | orchestrator | ++ semver latest 7.0.0
2026-03-01 00:44:16.592311 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-01 00:44:16.592413 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-01 00:44:16.592423 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-01 00:44:18.323278 | orchestrator | 2026-03-01 00:44:18 | INFO  | Trying to run play pull-images in environment custom
2026-03-01 00:44:28.384459 | orchestrator | 2026-03-01 00:44:28 | INFO  | Prepare task for execution of pull-images.
2026-03-01 00:44:28.458932 | orchestrator | 2026-03-01 00:44:28 | INFO  | Task 5248a885-9ddd-43b5-8291-29dfe6058a72 (pull-images) was prepared for execution.
2026-03-01 00:44:28.459066 | orchestrator | 2026-03-01 00:44:28 | INFO  | Task 5248a885-9ddd-43b5-8291-29dfe6058a72 is running in background. No more output. Check ARA for logs.
2026-03-01 00:44:30.536290 | orchestrator | 2026-03-01 00:44:30 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-01 00:44:40.682490 | orchestrator | 2026-03-01 00:44:40 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-01 00:44:40.747388 | orchestrator | 2026-03-01 00:44:40 | INFO  | Task 267a1985-8b9e-479a-9c40-65685c05ea4a (wipe-partitions) was prepared for execution.
2026-03-01 00:44:40.747536 | orchestrator | 2026-03-01 00:44:40 | INFO  | It takes a moment until task 267a1985-8b9e-479a-9c40-65685c05ea4a (wipe-partitions) has been started and output is visible here.
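The trace above gates the image pull on the manager version: `semver latest 7.0.0` prints -1, the `-ge 0` test fails, but the literal `latest` match still triggers `osism apply`. A minimal sketch of that decision logic, using a `sort -V` comparator as a stand-in for the testbed's `semver` helper (an assumption, since the helper itself is not shown in this log):

```shell
#!/usr/bin/env bash
# Sketch of the version gate from the trace above: pull when MANAGER_VERSION
# is a release >= 7.0.0 or the literal "latest". version_ge is a hypothetical
# stand-in for the `semver` helper sourced by the testbed scripts.
version_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

should_pull() {
    local v="$1"
    [[ "$v" == "latest" ]] && return 0   # the [[ latest == \l\a\t\e\s\t ]] branch
    version_ge "$v" "7.0.0"              # the `semver ... -ge 0` branch
}

should_pull latest && echo "pull images"
should_pull 6.0.0  || echo "skip pull"
```

With MANAGER_VERSION=latest, as in this run, the gate passes and `osism apply --no-wait -r 2 -e custom pull-images` is queued as a background task.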
2026-03-01 00:44:52.831646 | orchestrator |
2026-03-01 00:44:52.831836 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-01 00:44:52.831857 | orchestrator |
2026-03-01 00:44:52.831867 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-01 00:44:52.831881 | orchestrator | Sunday 01 March 2026 00:44:44 +0000 (0:00:00.120) 0:00:00.120 **********
2026-03-01 00:44:52.831913 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:44:52.832010 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:44:52.832021 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:44:52.832029 | orchestrator |
2026-03-01 00:44:52.832038 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-01 00:44:52.832046 | orchestrator | Sunday 01 March 2026 00:44:45 +0000 (0:00:00.592) 0:00:00.713 **********
2026-03-01 00:44:52.832059 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:44:52.832069 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:44:52.832078 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:44:52.832086 | orchestrator |
2026-03-01 00:44:52.832096 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-01 00:44:52.832105 | orchestrator | Sunday 01 March 2026 00:44:45 +0000 (0:00:00.332) 0:00:01.045 **********
2026-03-01 00:44:52.832111 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:44:52.832117 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:44:52.832123 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:44:52.832128 | orchestrator |
2026-03-01 00:44:52.832134 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-01 00:44:52.832139 | orchestrator | Sunday 01 March 2026 00:44:46 +0000 (0:00:00.635) 0:00:01.680 **********
2026-03-01 00:44:52.832144 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:44:52.832150 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:44:52.832155 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:44:52.832160 | orchestrator |
2026-03-01 00:44:52.832166 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-01 00:44:52.832172 | orchestrator | Sunday 01 March 2026 00:44:46 +0000 (0:00:00.216) 0:00:01.896 **********
2026-03-01 00:44:52.832177 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-01 00:44:52.832186 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-01 00:44:52.832192 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-01 00:44:52.832199 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-01 00:44:52.832205 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-01 00:44:52.832211 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-01 00:44:52.832218 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-01 00:44:52.832224 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-01 00:44:52.832230 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-01 00:44:52.832237 | orchestrator |
2026-03-01 00:44:52.832244 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-01 00:44:52.832251 | orchestrator | Sunday 01 March 2026 00:44:47 +0000 (0:00:01.289) 0:00:03.186 **********
2026-03-01 00:44:52.832257 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-01 00:44:52.832264 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-01 00:44:52.832270 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-01 00:44:52.832276 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-01 00:44:52.832282 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-01 00:44:52.832289 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-01 00:44:52.832295 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-01 00:44:52.832302 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-01 00:44:52.832307 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-01 00:44:52.832313 | orchestrator |
2026-03-01 00:44:52.832318 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-01 00:44:52.832324 | orchestrator | Sunday 01 March 2026 00:44:49 +0000 (0:00:01.532) 0:00:04.719 **********
2026-03-01 00:44:52.832329 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-01 00:44:52.832334 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-01 00:44:52.832339 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-01 00:44:52.832350 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-01 00:44:52.832364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-01 00:44:52.832369 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-01 00:44:52.832374 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-01 00:44:52.832381 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-01 00:44:52.832390 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-01 00:44:52.832398 | orchestrator |
2026-03-01 00:44:52.832412 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-01 00:44:52.832422 | orchestrator | Sunday 01 March 2026 00:44:51 +0000 (0:00:00.602) 0:00:06.776 **********
2026-03-01 00:44:52.832430 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:44:52.832438 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:44:52.832446 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:44:52.832453 | orchestrator |
2026-03-01 00:44:52.832462 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-01 00:44:52.832471 | orchestrator | Sunday 01 March 2026 00:44:51 +0000 (0:00:00.602) 0:00:07.379 **********
2026-03-01 00:44:52.832481 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:44:52.832490 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:44:52.832499 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:44:52.832508 | orchestrator |
2026-03-01 00:44:52.832517 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:44:52.832525 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:44:52.832535 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:44:52.832569 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:44:52.832580 | orchestrator |
2026-03-01 00:44:52.832589 | orchestrator |
2026-03-01 00:44:52.832598 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:44:52.832607 | orchestrator | Sunday 01 March 2026 00:44:52 +0000 (0:00:00.638) 0:00:08.017 **********
2026-03-01 00:44:52.832615 | orchestrator | ===============================================================================
2026-03-01 00:44:52.832623 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.06s
2026-03-01 00:44:52.832631 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.53s
2026-03-01 00:44:52.832639 | orchestrator | Check device availability ----------------------------------------------- 1.29s
2026-03-01 00:44:52.832646 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s
2026-03-01 00:44:52.832654 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s
2026-03-01 00:44:52.832662 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-03-01 00:44:52.832671 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-03-01 00:44:52.832679 | orchestrator | Remove all rook related logical devices --------------------------------- 0.33s
2026-03-01 00:44:52.832687 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-03-01 00:45:05.011450 | orchestrator | 2026-03-01 00:45:05 | INFO  | Prepare task for execution of facts.
2026-03-01 00:45:05.074710 | orchestrator | 2026-03-01 00:45:05 | INFO  | Task afc6cc40-dc9d-4eac-8f19-8ff1edfd7072 (facts) was prepared for execution.
2026-03-01 00:45:05.074799 | orchestrator | 2026-03-01 00:45:05 | INFO  | It takes a moment until task afc6cc40-dc9d-4eac-8f19-8ff1edfd7072 (facts) has been started and output is visible here.
2026-03-01 00:45:17.353661 | orchestrator |
2026-03-01 00:45:17.353738 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-01 00:45:17.353745 | orchestrator |
2026-03-01 00:45:17.353768 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-01 00:45:17.353772 | orchestrator | Sunday 01 March 2026 00:45:09 +0000 (0:00:00.239) 0:00:00.239 **********
2026-03-01 00:45:17.353777 | orchestrator | ok: [testbed-manager]
2026-03-01 00:45:17.353782 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:45:17.353786 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:45:17.353790 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:45:17.353794 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:45:17.353797 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:45:17.353801 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:45:17.353805 | orchestrator |
2026-03-01 00:45:17.353822 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-01 00:45:17.353826 | orchestrator | Sunday 01 March 2026 00:45:10 +0000 (0:00:01.019) 0:00:01.259 **********
2026-03-01 00:45:17.353831 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:45:17.353835 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:45:17.353839 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:45:17.353842 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:45:17.353846 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:17.353850 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:45:17.353853 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:45:17.353857 | orchestrator |
2026-03-01 00:45:17.353861 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-01 00:45:17.353865 | orchestrator |
2026-03-01 00:45:17.353868 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-01 00:45:17.353873 | orchestrator | Sunday 01 March 2026 00:45:11 +0000 (0:00:01.082) 0:00:02.342 **********
2026-03-01 00:45:17.353877 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:45:17.353881 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:45:17.353884 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:45:17.353888 | orchestrator | ok: [testbed-manager]
2026-03-01 00:45:17.353892 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:45:17.353895 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:45:17.353899 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:45:17.353903 | orchestrator |
2026-03-01 00:45:17.353906 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-01 00:45:17.353910 | orchestrator |
2026-03-01 00:45:17.353914 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-01 00:45:17.353918 | orchestrator | Sunday 01 March 2026 00:45:16 +0000 (0:00:05.499) 0:00:07.841 **********
2026-03-01 00:45:17.353921 | orchestrator | skipping: [testbed-manager]
2026-03-01 00:45:17.353925 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:45:17.353929 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:45:17.353932 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:45:17.353936 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:17.353940 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:45:17.353943 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:45:17.353947 | orchestrator |
2026-03-01 00:45:17.353951 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:45:17.353955 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.353960 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.353964 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.353967 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.353971 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.353978 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.354111 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 00:45:17.354117 | orchestrator |
2026-03-01 00:45:17.354121 | orchestrator |
2026-03-01 00:45:17.354125 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:45:17.354129 | orchestrator | Sunday 01 March 2026 00:45:17 +0000 (0:00:00.453) 0:00:08.295 **********
2026-03-01 00:45:17.354133 | orchestrator | ===============================================================================
2026-03-01 00:45:17.354136 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.50s
2026-03-01 00:45:17.354140 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2026-03-01 00:45:17.354144 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2026-03-01 00:45:17.354148 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2026-03-01 00:45:19.386282 | orchestrator | 2026-03-01 00:45:19 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-01 00:45:19.443121 | orchestrator | 2026-03-01 00:45:19 | INFO  | Task 2447030a-c5e1-4fa1-8441-560a6e2b0ac3 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-01 00:45:19.443241 | orchestrator | 2026-03-01 00:45:19 | INFO  | It takes a moment until task 2447030a-c5e1-4fa1-8441-560a6e2b0ac3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-01 00:45:30.855056 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-01 00:45:30.855129 | orchestrator | 2.16.14
2026-03-01 00:45:30.855144 | orchestrator |
2026-03-01 00:45:30.855163 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-01 00:45:30.855174 | orchestrator |
2026-03-01 00:45:30.855184 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-01 00:45:30.855194 | orchestrator | Sunday 01 March 2026 00:45:23 +0000 (0:00:00.292) 0:00:00.292 **********
2026-03-01 00:45:30.855206 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-01 00:45:30.855216 | orchestrator |
2026-03-01 00:45:30.855221 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-01 00:45:30.855227 | orchestrator | Sunday 01 March 2026 00:45:23 +0000 (0:00:00.228) 0:00:00.521 **********
2026-03-01 00:45:30.855233 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:45:30.855238 | orchestrator |
2026-03-01 00:45:30.855244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855253 | orchestrator | Sunday 01 March 2026 00:45:24 +0000 (0:00:00.208) 0:00:00.730 **********
2026-03-01 00:45:30.855261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-01 00:45:30.855269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-01 00:45:30.855278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-01 00:45:30.855285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-01 00:45:30.855295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-01 00:45:30.855304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-01 00:45:30.855313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-01 00:45:30.855323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-01 00:45:30.855334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-01 00:45:30.855343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-01 00:45:30.855370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-01 00:45:30.855376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-01 00:45:30.855382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-01 00:45:30.855388 | orchestrator |
2026-03-01 00:45:30.855393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855399 | orchestrator | Sunday 01 March 2026 00:45:24 +0000 (0:00:00.455) 0:00:01.185 **********
2026-03-01 00:45:30.855405 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855411 | orchestrator |
2026-03-01 00:45:30.855416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855422 | orchestrator | Sunday 01 March 2026 00:45:24 +0000 (0:00:00.195) 0:00:01.381 **********
2026-03-01 00:45:30.855428 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855433 | orchestrator |
2026-03-01 00:45:30.855439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855448 | orchestrator | Sunday 01 March 2026 00:45:24 +0000 (0:00:00.187) 0:00:01.568 **********
2026-03-01 00:45:30.855453 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855459 | orchestrator |
2026-03-01 00:45:30.855465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855470 | orchestrator | Sunday 01 March 2026 00:45:25 +0000 (0:00:00.202) 0:00:01.771 **********
2026-03-01 00:45:30.855476 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855482 | orchestrator |
2026-03-01 00:45:30.855488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855493 | orchestrator | Sunday 01 March 2026 00:45:25 +0000 (0:00:00.198) 0:00:01.970 **********
2026-03-01 00:45:30.855499 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855505 | orchestrator |
2026-03-01 00:45:30.855510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855516 | orchestrator | Sunday 01 March 2026 00:45:25 +0000 (0:00:00.217) 0:00:02.187 **********
2026-03-01 00:45:30.855522 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855527 | orchestrator |
2026-03-01 00:45:30.855533 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855539 | orchestrator | Sunday 01 March 2026 00:45:25 +0000 (0:00:00.196) 0:00:02.384 **********
2026-03-01 00:45:30.855544 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855550 | orchestrator |
2026-03-01 00:45:30.855556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855561 | orchestrator | Sunday 01 March 2026 00:45:25 +0000 (0:00:00.202) 0:00:02.586 **********
2026-03-01 00:45:30.855567 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855573 | orchestrator |
2026-03-01 00:45:30.855578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855584 | orchestrator | Sunday 01 March 2026 00:45:26 +0000 (0:00:00.198) 0:00:02.785 **********
2026-03-01 00:45:30.855590 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6)
2026-03-01 00:45:30.855597 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6)
2026-03-01 00:45:30.855602 | orchestrator |
2026-03-01 00:45:30.855608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855626 | orchestrator | Sunday 01 March 2026 00:45:26 +0000 (0:00:00.403) 0:00:03.188 **********
2026-03-01 00:45:30.855632 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0)
2026-03-01 00:45:30.855638 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0)
2026-03-01 00:45:30.855644 | orchestrator |
2026-03-01 00:45:30.855650 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855661 | orchestrator | Sunday 01 March 2026 00:45:27 +0000 (0:00:00.615) 0:00:03.803 **********
2026-03-01 00:45:30.855666 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec)
2026-03-01 00:45:30.855672 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec)
2026-03-01 00:45:30.855687 | orchestrator |
2026-03-01 00:45:30.855693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855699 | orchestrator | Sunday 01 March 2026 00:45:27 +0000 (0:00:00.651) 0:00:04.455 **********
2026-03-01 00:45:30.855705 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17)
2026-03-01 00:45:30.855710 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17)
2026-03-01 00:45:30.855716 | orchestrator |
2026-03-01 00:45:30.855722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:45:30.855735 | orchestrator | Sunday 01 March 2026 00:45:28 +0000 (0:00:00.892) 0:00:05.347 **********
2026-03-01 00:45:30.855741 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-01 00:45:30.855747 | orchestrator |
2026-03-01 00:45:30.855752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855758 | orchestrator | Sunday 01 March 2026 00:45:28 +0000 (0:00:00.335) 0:00:05.683 **********
2026-03-01 00:45:30.855768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-01 00:45:30.855774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-01 00:45:30.855780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-01 00:45:30.855785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-01 00:45:30.855791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-01 00:45:30.855797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-01 00:45:30.855802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-01 00:45:30.855808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-01 00:45:30.855814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-01 00:45:30.855819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-01 00:45:30.855825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-01 00:45:30.855831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-01 00:45:30.855836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-01 00:45:30.855842 | orchestrator |
2026-03-01 00:45:30.855848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855854 | orchestrator | Sunday 01 March 2026 00:45:29 +0000 (0:00:00.380) 0:00:06.063 **********
2026-03-01 00:45:30.855859 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855865 | orchestrator |
2026-03-01 00:45:30.855871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855876 | orchestrator | Sunday 01 March 2026 00:45:29 +0000 (0:00:00.206) 0:00:06.269 **********
2026-03-01 00:45:30.855882 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855888 | orchestrator |
2026-03-01 00:45:30.855893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855899 | orchestrator | Sunday 01 March 2026 00:45:29 +0000 (0:00:00.201) 0:00:06.471 **********
2026-03-01 00:45:30.855905 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855914 | orchestrator |
2026-03-01 00:45:30.855920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855926 | orchestrator | Sunday 01 March 2026 00:45:30 +0000 (0:00:00.237) 0:00:06.708 **********
2026-03-01 00:45:30.855932 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855937 | orchestrator |
2026-03-01 00:45:30.855943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855949 | orchestrator | Sunday 01 March 2026 00:45:30 +0000 (0:00:00.205) 0:00:06.913 **********
2026-03-01 00:45:30.855954 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855960 | orchestrator |
2026-03-01 00:45:30.855969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855975 | orchestrator | Sunday 01 March 2026 00:45:30 +0000 (0:00:00.191) 0:00:07.105 **********
2026-03-01 00:45:30.855981 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.855987 | orchestrator |
2026-03-01 00:45:30.855992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:30.855998 | orchestrator | Sunday 01 March 2026 00:45:30 +0000 (0:00:00.223) 0:00:07.328 **********
2026-03-01 00:45:30.856004 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:30.856009 | orchestrator |
2026-03-01 00:45:30.856062 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475108 | orchestrator | Sunday 01 March 2026 00:45:30 +0000 (0:00:00.202) 0:00:07.531 **********
2026-03-01 00:45:38.475218 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475229 | orchestrator |
2026-03-01 00:45:38.475236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475242 | orchestrator | Sunday 01 March 2026 00:45:31 +0000 (0:00:00.228) 0:00:07.759 **********
2026-03-01 00:45:38.475247 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-01 00:45:38.475254 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-01 00:45:38.475286 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-01 00:45:38.475293 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-01 00:45:38.475298 | orchestrator |
2026-03-01 00:45:38.475304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475310 | orchestrator | Sunday 01 March 2026 00:45:32 +0000 (0:00:00.933) 0:00:08.692 **********
2026-03-01 00:45:38.475316 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475321 | orchestrator |
2026-03-01 00:45:38.475387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475395 | orchestrator | Sunday 01 March 2026 00:45:32 +0000 (0:00:00.190) 0:00:08.883 **********
2026-03-01 00:45:38.475400 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475406 | orchestrator |
2026-03-01 00:45:38.475411 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475416 | orchestrator | Sunday 01 March 2026 00:45:32 +0000 (0:00:00.186) 0:00:09.070 **********
2026-03-01 00:45:38.475422 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475427 | orchestrator |
2026-03-01 00:45:38.475432 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:45:38.475437 | orchestrator | Sunday 01 March 2026 00:45:32 +0000 (0:00:00.191) 0:00:09.261 **********
2026-03-01 00:45:38.475442 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475448 | orchestrator |
2026-03-01 00:45:38.475453 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-01 00:45:38.475458 | orchestrator | Sunday 01 March 2026 00:45:32 +0000 (0:00:00.237) 0:00:09.498 **********
2026-03-01 00:45:38.475463 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-01 00:45:38.475469 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-01 00:45:38.475474 | orchestrator |
2026-03-01 00:45:38.475479 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-01 00:45:38.475484 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.193) 0:00:09.691 **********
2026-03-01 00:45:38.475509 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475514 | orchestrator |
2026-03-01 00:45:38.475519 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-01 00:45:38.475525 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.134) 0:00:09.826 **********
2026-03-01 00:45:38.475530 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475535 | orchestrator |
2026-03-01 00:45:38.475542 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-01 00:45:38.475547 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.128) 0:00:09.954 **********
2026-03-01 00:45:38.475552 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475557 | orchestrator |
2026-03-01 00:45:38.475562 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-01 00:45:38.475567 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.122) 0:00:10.077 **********
2026-03-01 00:45:38.475572 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:45:38.475578 | orchestrator |
2026-03-01 00:45:38.475583 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-01 00:45:38.475588 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.142) 0:00:10.219 **********
2026-03-01 00:45:38.475594 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31f22992-0e1a-5ef5-a8b3-14a12910c272'}})
2026-03-01 00:45:38.475600 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}})
2026-03-01 00:45:38.475605 | orchestrator |
2026-03-01 00:45:38.475610 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-01 00:45:38.475615 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.150) 0:00:10.370 **********
2026-03-01 00:45:38.475620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31f22992-0e1a-5ef5-a8b3-14a12910c272'}})
2026-03-01 00:45:38.475637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}})
2026-03-01 00:45:38.475643 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475649 | orchestrator |
2026-03-01 00:45:38.475655 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-01 00:45:38.475661 | orchestrator | Sunday 01 March 2026 00:45:33 +0000 (0:00:00.153) 0:00:10.523 **********
2026-03-01 00:45:38.475666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31f22992-0e1a-5ef5-a8b3-14a12910c272'}})
2026-03-01 00:45:38.475673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}})
2026-03-01 00:45:38.475678 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475684 | orchestrator |
2026-03-01 00:45:38.475690 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-01 00:45:38.475696 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.350) 0:00:10.874 **********
2026-03-01 00:45:38.475701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31f22992-0e1a-5ef5-a8b3-14a12910c272'}})
2026-03-01 00:45:38.475722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}})
2026-03-01 00:45:38.475729 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:45:38.475734 |
orchestrator | 2026-03-01 00:45:38.475740 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-01 00:45:38.475746 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.165) 0:00:11.039 ********** 2026-03-01 00:45:38.475752 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:45:38.475758 | orchestrator | 2026-03-01 00:45:38.475764 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-01 00:45:38.475770 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.124) 0:00:11.164 ********** 2026-03-01 00:45:38.475776 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:45:38.475787 | orchestrator | 2026-03-01 00:45:38.475793 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-01 00:45:38.475799 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.137) 0:00:11.302 ********** 2026-03-01 00:45:38.475805 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.475811 | orchestrator | 2026-03-01 00:45:38.475825 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-01 00:45:38.475831 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.151) 0:00:11.454 ********** 2026-03-01 00:45:38.475837 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.475843 | orchestrator | 2026-03-01 00:45:38.475849 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-01 00:45:38.475855 | orchestrator | Sunday 01 March 2026 00:45:34 +0000 (0:00:00.143) 0:00:11.597 ********** 2026-03-01 00:45:38.475861 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.475866 | orchestrator | 2026-03-01 00:45:38.475872 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-01 00:45:38.475878 | orchestrator | Sunday 01 March 2026 00:45:35 +0000 
(0:00:00.152) 0:00:11.749 ********** 2026-03-01 00:45:38.475884 | orchestrator | ok: [testbed-node-3] => { 2026-03-01 00:45:38.475890 | orchestrator |  "ceph_osd_devices": { 2026-03-01 00:45:38.475896 | orchestrator |  "sdb": { 2026-03-01 00:45:38.475902 | orchestrator |  "osd_lvm_uuid": "31f22992-0e1a-5ef5-a8b3-14a12910c272" 2026-03-01 00:45:38.475909 | orchestrator |  }, 2026-03-01 00:45:38.475915 | orchestrator |  "sdc": { 2026-03-01 00:45:38.475920 | orchestrator |  "osd_lvm_uuid": "71bbeaa0-80e8-52b0-b7ca-02965d05b7d3" 2026-03-01 00:45:38.475926 | orchestrator |  } 2026-03-01 00:45:38.475932 | orchestrator |  } 2026-03-01 00:45:38.475938 | orchestrator | } 2026-03-01 00:45:38.475944 | orchestrator | 2026-03-01 00:45:38.475950 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-01 00:45:38.475956 | orchestrator | Sunday 01 March 2026 00:45:35 +0000 (0:00:00.152) 0:00:11.901 ********** 2026-03-01 00:45:38.475961 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.475967 | orchestrator | 2026-03-01 00:45:38.475974 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-01 00:45:38.475980 | orchestrator | Sunday 01 March 2026 00:45:35 +0000 (0:00:00.172) 0:00:12.074 ********** 2026-03-01 00:45:38.475985 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.475991 | orchestrator | 2026-03-01 00:45:38.475997 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-01 00:45:38.476002 | orchestrator | Sunday 01 March 2026 00:45:35 +0000 (0:00:00.138) 0:00:12.213 ********** 2026-03-01 00:45:38.476007 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:45:38.476012 | orchestrator | 2026-03-01 00:45:38.476017 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-01 00:45:38.476030 | orchestrator | Sunday 01 March 2026 00:45:35 +0000 
(0:00:00.140) 0:00:12.354 ********** 2026-03-01 00:45:38.476097 | orchestrator | changed: [testbed-node-3] => { 2026-03-01 00:45:38.476109 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-01 00:45:38.476117 | orchestrator |  "ceph_osd_devices": { 2026-03-01 00:45:38.476126 | orchestrator |  "sdb": { 2026-03-01 00:45:38.476135 | orchestrator |  "osd_lvm_uuid": "31f22992-0e1a-5ef5-a8b3-14a12910c272" 2026-03-01 00:45:38.476142 | orchestrator |  }, 2026-03-01 00:45:38.476151 | orchestrator |  "sdc": { 2026-03-01 00:45:38.476158 | orchestrator |  "osd_lvm_uuid": "71bbeaa0-80e8-52b0-b7ca-02965d05b7d3" 2026-03-01 00:45:38.476163 | orchestrator |  } 2026-03-01 00:45:38.476168 | orchestrator |  }, 2026-03-01 00:45:38.476173 | orchestrator |  "lvm_volumes": [ 2026-03-01 00:45:38.476178 | orchestrator |  { 2026-03-01 00:45:38.476183 | orchestrator |  "data": "osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272", 2026-03-01 00:45:38.476188 | orchestrator |  "data_vg": "ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272" 2026-03-01 00:45:38.476198 | orchestrator |  }, 2026-03-01 00:45:38.476203 | orchestrator |  { 2026-03-01 00:45:38.476209 | orchestrator |  "data": "osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3", 2026-03-01 00:45:38.476214 | orchestrator |  "data_vg": "ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3" 2026-03-01 00:45:38.476219 | orchestrator |  } 2026-03-01 00:45:38.476224 | orchestrator |  ] 2026-03-01 00:45:38.476229 | orchestrator |  } 2026-03-01 00:45:38.476234 | orchestrator | } 2026-03-01 00:45:38.476239 | orchestrator | 2026-03-01 00:45:38.476244 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-01 00:45:38.476249 | orchestrator | Sunday 01 March 2026 00:45:36 +0000 (0:00:00.402) 0:00:12.756 ********** 2026-03-01 00:45:38.476254 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-01 00:45:38.476259 | orchestrator | 2026-03-01 00:45:38.476264 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-01 00:45:38.476269 | orchestrator | 2026-03-01 00:45:38.476274 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-01 00:45:38.476279 | orchestrator | Sunday 01 March 2026 00:45:37 +0000 (0:00:01.898) 0:00:14.655 ********** 2026-03-01 00:45:38.476284 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-01 00:45:38.476289 | orchestrator | 2026-03-01 00:45:38.476298 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-01 00:45:38.476303 | orchestrator | Sunday 01 March 2026 00:45:38 +0000 (0:00:00.255) 0:00:14.910 ********** 2026-03-01 00:45:38.476308 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:45:38.476313 | orchestrator | 2026-03-01 00:45:38.476324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.363862 | orchestrator | Sunday 01 March 2026 00:45:38 +0000 (0:00:00.247) 0:00:15.158 ********** 2026-03-01 00:45:46.363942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-01 00:45:46.363949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-01 00:45:46.363955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-01 00:45:46.363960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-01 00:45:46.363966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-01 00:45:46.363971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-01 00:45:46.363976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-01 00:45:46.363985 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-01 00:45:46.363990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-01 00:45:46.363997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-01 00:45:46.364001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-01 00:45:46.364007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-01 00:45:46.364013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-01 00:45:46.364018 | orchestrator | 2026-03-01 00:45:46.364025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364031 | orchestrator | Sunday 01 March 2026 00:45:38 +0000 (0:00:00.367) 0:00:15.525 ********** 2026-03-01 00:45:46.364036 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364043 | orchestrator | 2026-03-01 00:45:46.364048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364081 | orchestrator | Sunday 01 March 2026 00:45:39 +0000 (0:00:00.203) 0:00:15.729 ********** 2026-03-01 00:45:46.364102 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364108 | orchestrator | 2026-03-01 00:45:46.364113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364119 | orchestrator | Sunday 01 March 2026 00:45:39 +0000 (0:00:00.197) 0:00:15.926 ********** 2026-03-01 00:45:46.364124 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364128 | orchestrator | 2026-03-01 00:45:46.364132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364135 | 
orchestrator | Sunday 01 March 2026 00:45:39 +0000 (0:00:00.233) 0:00:16.160 ********** 2026-03-01 00:45:46.364138 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364141 | orchestrator | 2026-03-01 00:45:46.364145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364149 | orchestrator | Sunday 01 March 2026 00:45:39 +0000 (0:00:00.208) 0:00:16.368 ********** 2026-03-01 00:45:46.364154 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364159 | orchestrator | 2026-03-01 00:45:46.364167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364172 | orchestrator | Sunday 01 March 2026 00:45:40 +0000 (0:00:00.602) 0:00:16.971 ********** 2026-03-01 00:45:46.364177 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364183 | orchestrator | 2026-03-01 00:45:46.364187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364193 | orchestrator | Sunday 01 March 2026 00:45:40 +0000 (0:00:00.211) 0:00:17.182 ********** 2026-03-01 00:45:46.364198 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364203 | orchestrator | 2026-03-01 00:45:46.364208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364212 | orchestrator | Sunday 01 March 2026 00:45:40 +0000 (0:00:00.201) 0:00:17.383 ********** 2026-03-01 00:45:46.364217 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364223 | orchestrator | 2026-03-01 00:45:46.364228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364233 | orchestrator | Sunday 01 March 2026 00:45:40 +0000 (0:00:00.205) 0:00:17.589 ********** 2026-03-01 00:45:46.364238 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060) 2026-03-01 00:45:46.364245 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060) 2026-03-01 00:45:46.364250 | orchestrator | 2026-03-01 00:45:46.364264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364270 | orchestrator | Sunday 01 March 2026 00:45:41 +0000 (0:00:00.432) 0:00:18.021 ********** 2026-03-01 00:45:46.364276 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389) 2026-03-01 00:45:46.364281 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389) 2026-03-01 00:45:46.364286 | orchestrator | 2026-03-01 00:45:46.364291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364297 | orchestrator | Sunday 01 March 2026 00:45:41 +0000 (0:00:00.437) 0:00:18.459 ********** 2026-03-01 00:45:46.364302 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6) 2026-03-01 00:45:46.364307 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6) 2026-03-01 00:45:46.364313 | orchestrator | 2026-03-01 00:45:46.364317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:46.364339 | orchestrator | Sunday 01 March 2026 00:45:42 +0000 (0:00:00.449) 0:00:18.908 ********** 2026-03-01 00:45:46.364343 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c) 2026-03-01 00:45:46.364350 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c) 2026-03-01 00:45:46.364354 | orchestrator | 2026-03-01 00:45:46.364362 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-01 00:45:46.364366 | orchestrator | Sunday 01 March 2026 00:45:42 +0000 (0:00:00.453) 0:00:19.361 ********** 2026-03-01 00:45:46.364369 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-01 00:45:46.364372 | orchestrator | 2026-03-01 00:45:46.364375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364378 | orchestrator | Sunday 01 March 2026 00:45:43 +0000 (0:00:00.376) 0:00:19.738 ********** 2026-03-01 00:45:46.364381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-01 00:45:46.364385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-01 00:45:46.364388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-01 00:45:46.364391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-01 00:45:46.364395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-01 00:45:46.364400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-01 00:45:46.364404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-01 00:45:46.364409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-01 00:45:46.364413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-01 00:45:46.364418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-01 00:45:46.364422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-01 00:45:46.364427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-01 00:45:46.364431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-01 00:45:46.364436 | orchestrator | 2026-03-01 00:45:46.364441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364447 | orchestrator | Sunday 01 March 2026 00:45:43 +0000 (0:00:00.384) 0:00:20.122 ********** 2026-03-01 00:45:46.364452 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364457 | orchestrator | 2026-03-01 00:45:46.364462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364467 | orchestrator | Sunday 01 March 2026 00:45:43 +0000 (0:00:00.524) 0:00:20.647 ********** 2026-03-01 00:45:46.364470 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364474 | orchestrator | 2026-03-01 00:45:46.364479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364486 | orchestrator | Sunday 01 March 2026 00:45:44 +0000 (0:00:00.175) 0:00:20.822 ********** 2026-03-01 00:45:46.364492 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364497 | orchestrator | 2026-03-01 00:45:46.364501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364507 | orchestrator | Sunday 01 March 2026 00:45:44 +0000 (0:00:00.243) 0:00:21.065 ********** 2026-03-01 00:45:46.364512 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364517 | orchestrator | 2026-03-01 00:45:46.364522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364525 | orchestrator | Sunday 01 March 2026 00:45:44 +0000 (0:00:00.262) 0:00:21.328 ********** 2026-03-01 00:45:46.364528 
| orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364531 | orchestrator | 2026-03-01 00:45:46.364534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364537 | orchestrator | Sunday 01 March 2026 00:45:44 +0000 (0:00:00.159) 0:00:21.488 ********** 2026-03-01 00:45:46.364540 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364547 | orchestrator | 2026-03-01 00:45:46.364553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364557 | orchestrator | Sunday 01 March 2026 00:45:44 +0000 (0:00:00.188) 0:00:21.676 ********** 2026-03-01 00:45:46.364560 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364563 | orchestrator | 2026-03-01 00:45:46.364566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364569 | orchestrator | Sunday 01 March 2026 00:45:45 +0000 (0:00:00.187) 0:00:21.864 ********** 2026-03-01 00:45:46.364572 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:46.364575 | orchestrator | 2026-03-01 00:45:46.364578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364581 | orchestrator | Sunday 01 March 2026 00:45:45 +0000 (0:00:00.253) 0:00:22.117 ********** 2026-03-01 00:45:46.364584 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-01 00:45:46.364588 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-01 00:45:46.364591 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-01 00:45:46.364594 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-01 00:45:46.364598 | orchestrator | 2026-03-01 00:45:46.364601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:46.364604 | orchestrator | Sunday 01 March 2026 00:45:46 +0000 (0:00:00.817) 0:00:22.935 
********** 2026-03-01 00:45:46.364607 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.566865 | orchestrator | 2026-03-01 00:45:51.566924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:51.566933 | orchestrator | Sunday 01 March 2026 00:45:46 +0000 (0:00:00.191) 0:00:23.126 ********** 2026-03-01 00:45:51.566939 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.566944 | orchestrator | 2026-03-01 00:45:51.566950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:51.566955 | orchestrator | Sunday 01 March 2026 00:45:46 +0000 (0:00:00.178) 0:00:23.304 ********** 2026-03-01 00:45:51.566960 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.566965 | orchestrator | 2026-03-01 00:45:51.566970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:51.566976 | orchestrator | Sunday 01 March 2026 00:45:46 +0000 (0:00:00.181) 0:00:23.486 ********** 2026-03-01 00:45:51.566981 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.566986 | orchestrator | 2026-03-01 00:45:51.566991 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-01 00:45:51.566996 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.472) 0:00:23.959 ********** 2026-03-01 00:45:51.567001 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-01 00:45:51.567006 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-01 00:45:51.567011 | orchestrator | 2026-03-01 00:45:51.567016 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-01 00:45:51.567022 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.151) 0:00:24.111 ********** 2026-03-01 00:45:51.567027 | orchestrator | skipping: 
[testbed-node-4] 2026-03-01 00:45:51.567032 | orchestrator | 2026-03-01 00:45:51.567037 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-01 00:45:51.567042 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.121) 0:00:24.232 ********** 2026-03-01 00:45:51.567047 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567052 | orchestrator | 2026-03-01 00:45:51.567057 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-01 00:45:51.567062 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.133) 0:00:24.366 ********** 2026-03-01 00:45:51.567093 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567103 | orchestrator | 2026-03-01 00:45:51.567113 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-01 00:45:51.567123 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.110) 0:00:24.477 ********** 2026-03-01 00:45:51.567145 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:45:51.567151 | orchestrator | 2026-03-01 00:45:51.567156 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-01 00:45:51.567162 | orchestrator | Sunday 01 March 2026 00:45:47 +0000 (0:00:00.114) 0:00:24.591 ********** 2026-03-01 00:45:51.567167 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '024d169c-08bb-513a-b447-fe5a7c318e63'}}) 2026-03-01 00:45:51.567173 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b33a93dc-e50a-56e8-9161-d310a7d41007'}}) 2026-03-01 00:45:51.567178 | orchestrator | 2026-03-01 00:45:51.567183 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-01 00:45:51.567191 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.136) 0:00:24.728 ********** 2026-03-01 00:45:51.567200 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '024d169c-08bb-513a-b447-fe5a7c318e63'}})  2026-03-01 00:45:51.567210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b33a93dc-e50a-56e8-9161-d310a7d41007'}})  2026-03-01 00:45:51.567218 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567226 | orchestrator | 2026-03-01 00:45:51.567233 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-01 00:45:51.567241 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.108) 0:00:24.836 ********** 2026-03-01 00:45:51.567249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '024d169c-08bb-513a-b447-fe5a7c318e63'}})  2026-03-01 00:45:51.567257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b33a93dc-e50a-56e8-9161-d310a7d41007'}})  2026-03-01 00:45:51.567264 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567272 | orchestrator | 2026-03-01 00:45:51.567280 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-01 00:45:51.567287 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.123) 0:00:24.960 ********** 2026-03-01 00:45:51.567295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '024d169c-08bb-513a-b447-fe5a7c318e63'}})  2026-03-01 00:45:51.567303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b33a93dc-e50a-56e8-9161-d310a7d41007'}})  2026-03-01 00:45:51.567312 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567321 | orchestrator | 2026-03-01 00:45:51.567342 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-01 00:45:51.567348 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 
(0:00:00.106) 0:00:25.067 ********** 2026-03-01 00:45:51.567353 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:45:51.567358 | orchestrator | 2026-03-01 00:45:51.567377 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-01 00:45:51.567382 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.096) 0:00:25.163 ********** 2026-03-01 00:45:51.567387 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:45:51.567392 | orchestrator | 2026-03-01 00:45:51.567397 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-01 00:45:51.567403 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.095) 0:00:25.259 ********** 2026-03-01 00:45:51.567418 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567424 | orchestrator | 2026-03-01 00:45:51.567429 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-01 00:45:51.567434 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.214) 0:00:25.473 ********** 2026-03-01 00:45:51.567439 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567444 | orchestrator | 2026-03-01 00:45:51.567449 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-01 00:45:51.567455 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.095) 0:00:25.569 ********** 2026-03-01 00:45:51.567461 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567473 | orchestrator | 2026-03-01 00:45:51.567479 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-01 00:45:51.567485 | orchestrator | Sunday 01 March 2026 00:45:48 +0000 (0:00:00.100) 0:00:25.670 ********** 2026-03-01 00:45:51.567491 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:45:51.567497 | orchestrator |  "ceph_osd_devices": { 2026-03-01 00:45:51.567503 | orchestrator |  "sdb": 
{ 2026-03-01 00:45:51.567509 | orchestrator |  "osd_lvm_uuid": "024d169c-08bb-513a-b447-fe5a7c318e63" 2026-03-01 00:45:51.567515 | orchestrator |  }, 2026-03-01 00:45:51.567521 | orchestrator |  "sdc": { 2026-03-01 00:45:51.567527 | orchestrator |  "osd_lvm_uuid": "b33a93dc-e50a-56e8-9161-d310a7d41007" 2026-03-01 00:45:51.567533 | orchestrator |  } 2026-03-01 00:45:51.567539 | orchestrator |  } 2026-03-01 00:45:51.567545 | orchestrator | } 2026-03-01 00:45:51.567551 | orchestrator | 2026-03-01 00:45:51.567556 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-01 00:45:51.567563 | orchestrator | Sunday 01 March 2026 00:45:49 +0000 (0:00:00.101) 0:00:25.772 ********** 2026-03-01 00:45:51.567568 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567574 | orchestrator | 2026-03-01 00:45:51.567580 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-01 00:45:51.567585 | orchestrator | Sunday 01 March 2026 00:45:49 +0000 (0:00:00.104) 0:00:25.876 ********** 2026-03-01 00:45:51.567591 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567597 | orchestrator | 2026-03-01 00:45:51.567603 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-01 00:45:51.567609 | orchestrator | Sunday 01 March 2026 00:45:49 +0000 (0:00:00.115) 0:00:25.992 ********** 2026-03-01 00:45:51.567614 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:45:51.567620 | orchestrator | 2026-03-01 00:45:51.567626 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-01 00:45:51.567632 | orchestrator | Sunday 01 March 2026 00:45:49 +0000 (0:00:00.113) 0:00:26.106 ********** 2026-03-01 00:45:51.567638 | orchestrator | changed: [testbed-node-4] => { 2026-03-01 00:45:51.567644 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-01 00:45:51.567650 | orchestrator 
|  "ceph_osd_devices": { 2026-03-01 00:45:51.567657 | orchestrator |  "sdb": { 2026-03-01 00:45:51.567663 | orchestrator |  "osd_lvm_uuid": "024d169c-08bb-513a-b447-fe5a7c318e63" 2026-03-01 00:45:51.567669 | orchestrator |  }, 2026-03-01 00:45:51.567675 | orchestrator |  "sdc": { 2026-03-01 00:45:51.567680 | orchestrator |  "osd_lvm_uuid": "b33a93dc-e50a-56e8-9161-d310a7d41007" 2026-03-01 00:45:51.567686 | orchestrator |  } 2026-03-01 00:45:51.567692 | orchestrator |  }, 2026-03-01 00:45:51.567698 | orchestrator |  "lvm_volumes": [ 2026-03-01 00:45:51.567704 | orchestrator |  { 2026-03-01 00:45:51.567710 | orchestrator |  "data": "osd-block-024d169c-08bb-513a-b447-fe5a7c318e63", 2026-03-01 00:45:51.567716 | orchestrator |  "data_vg": "ceph-024d169c-08bb-513a-b447-fe5a7c318e63" 2026-03-01 00:45:51.567722 | orchestrator |  }, 2026-03-01 00:45:51.567727 | orchestrator |  { 2026-03-01 00:45:51.567733 | orchestrator |  "data": "osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007", 2026-03-01 00:45:51.567739 | orchestrator |  "data_vg": "ceph-b33a93dc-e50a-56e8-9161-d310a7d41007" 2026-03-01 00:45:51.567745 | orchestrator |  } 2026-03-01 00:45:51.567751 | orchestrator |  ] 2026-03-01 00:45:51.567756 | orchestrator |  } 2026-03-01 00:45:51.567762 | orchestrator | } 2026-03-01 00:45:51.567768 | orchestrator | 2026-03-01 00:45:51.567774 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-01 00:45:51.567781 | orchestrator | Sunday 01 March 2026 00:45:49 +0000 (0:00:00.152) 0:00:26.258 ********** 2026-03-01 00:45:51.567786 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-01 00:45:51.567794 | orchestrator | 2026-03-01 00:45:51.567807 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-01 00:45:51.567816 | orchestrator | 2026-03-01 00:45:51.567825 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-03-01 00:45:51.567833 | orchestrator | Sunday 01 March 2026 00:45:50 +0000 (0:00:00.935) 0:00:27.194 ********** 2026-03-01 00:45:51.567842 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-01 00:45:51.567851 | orchestrator | 2026-03-01 00:45:51.567860 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-01 00:45:51.567869 | orchestrator | Sunday 01 March 2026 00:45:51 +0000 (0:00:00.537) 0:00:27.732 ********** 2026-03-01 00:45:51.567886 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:45:51.567897 | orchestrator | 2026-03-01 00:45:51.567903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:51.567918 | orchestrator | Sunday 01 March 2026 00:45:51 +0000 (0:00:00.229) 0:00:27.962 ********** 2026-03-01 00:45:51.567924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-01 00:45:51.567929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-01 00:45:51.567934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-01 00:45:51.567939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-01 00:45:51.567944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-01 00:45:51.567955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-01 00:45:59.251797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-01 00:45:59.251891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-01 00:45:59.251906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-01 
00:45:59.251916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-01 00:45:59.251944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-01 00:45:59.251954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-01 00:45:59.251963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-01 00:45:59.251973 | orchestrator | 2026-03-01 00:45:59.251984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.251995 | orchestrator | Sunday 01 March 2026 00:45:51 +0000 (0:00:00.361) 0:00:28.323 ********** 2026-03-01 00:45:59.252004 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252015 | orchestrator | 2026-03-01 00:45:59.252024 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252033 | orchestrator | Sunday 01 March 2026 00:45:51 +0000 (0:00:00.204) 0:00:28.528 ********** 2026-03-01 00:45:59.252043 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252052 | orchestrator | 2026-03-01 00:45:59.252061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252070 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.188) 0:00:28.716 ********** 2026-03-01 00:45:59.252079 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252111 | orchestrator | 2026-03-01 00:45:59.252120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252129 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.180) 0:00:28.897 ********** 2026-03-01 00:45:59.252144 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252153 | orchestrator | 2026-03-01 00:45:59.252162 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252171 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.199) 0:00:29.097 ********** 2026-03-01 00:45:59.252203 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252212 | orchestrator | 2026-03-01 00:45:59.252221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252231 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.200) 0:00:29.297 ********** 2026-03-01 00:45:59.252240 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252250 | orchestrator | 2026-03-01 00:45:59.252259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252268 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.190) 0:00:29.488 ********** 2026-03-01 00:45:59.252278 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252287 | orchestrator | 2026-03-01 00:45:59.252297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252306 | orchestrator | Sunday 01 March 2026 00:45:52 +0000 (0:00:00.164) 0:00:29.652 ********** 2026-03-01 00:45:59.252316 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252326 | orchestrator | 2026-03-01 00:45:59.252335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252345 | orchestrator | Sunday 01 March 2026 00:45:53 +0000 (0:00:00.185) 0:00:29.837 ********** 2026-03-01 00:45:59.252356 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e) 2026-03-01 00:45:59.252369 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e) 2026-03-01 00:45:59.252379 | orchestrator | 2026-03-01 00:45:59.252389 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252400 | orchestrator | Sunday 01 March 2026 00:45:53 +0000 (0:00:00.700) 0:00:30.537 ********** 2026-03-01 00:45:59.252408 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa) 2026-03-01 00:45:59.252415 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa) 2026-03-01 00:45:59.252421 | orchestrator | 2026-03-01 00:45:59.252428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252435 | orchestrator | Sunday 01 March 2026 00:45:54 +0000 (0:00:00.468) 0:00:31.006 ********** 2026-03-01 00:45:59.252442 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7) 2026-03-01 00:45:59.252449 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7) 2026-03-01 00:45:59.252455 | orchestrator | 2026-03-01 00:45:59.252462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252469 | orchestrator | Sunday 01 March 2026 00:45:54 +0000 (0:00:00.444) 0:00:31.450 ********** 2026-03-01 00:45:59.252475 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c) 2026-03-01 00:45:59.252482 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c) 2026-03-01 00:45:59.252489 | orchestrator | 2026-03-01 00:45:59.252495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:45:59.252502 | orchestrator | Sunday 01 March 2026 00:45:55 +0000 (0:00:00.456) 0:00:31.907 ********** 2026-03-01 00:45:59.252508 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-01 00:45:59.252515 | 
orchestrator | 2026-03-01 00:45:59.252522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252546 | orchestrator | Sunday 01 March 2026 00:45:55 +0000 (0:00:00.372) 0:00:32.280 ********** 2026-03-01 00:45:59.252553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-01 00:45:59.252560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-01 00:45:59.252567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-01 00:45:59.252574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-01 00:45:59.252588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-01 00:45:59.252595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-01 00:45:59.252601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-01 00:45:59.252608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-01 00:45:59.252614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-01 00:45:59.252621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-01 00:45:59.252627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-01 00:45:59.252634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-01 00:45:59.252640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-01 00:45:59.252647 | orchestrator | 
2026-03-01 00:45:59.252654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252660 | orchestrator | Sunday 01 March 2026 00:45:55 +0000 (0:00:00.390) 0:00:32.670 ********** 2026-03-01 00:45:59.252667 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252674 | orchestrator | 2026-03-01 00:45:59.252681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252687 | orchestrator | Sunday 01 March 2026 00:45:56 +0000 (0:00:00.177) 0:00:32.847 ********** 2026-03-01 00:45:59.252694 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252701 | orchestrator | 2026-03-01 00:45:59.252708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252714 | orchestrator | Sunday 01 March 2026 00:45:56 +0000 (0:00:00.209) 0:00:33.057 ********** 2026-03-01 00:45:59.252721 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252728 | orchestrator | 2026-03-01 00:45:59.252734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252746 | orchestrator | Sunday 01 March 2026 00:45:56 +0000 (0:00:00.170) 0:00:33.228 ********** 2026-03-01 00:45:59.252752 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252758 | orchestrator | 2026-03-01 00:45:59.252763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252769 | orchestrator | Sunday 01 March 2026 00:45:56 +0000 (0:00:00.190) 0:00:33.418 ********** 2026-03-01 00:45:59.252775 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252780 | orchestrator | 2026-03-01 00:45:59.252786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252792 | orchestrator | Sunday 01 March 2026 00:45:56 +0000 
(0:00:00.194) 0:00:33.613 ********** 2026-03-01 00:45:59.252798 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252803 | orchestrator | 2026-03-01 00:45:59.252809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252815 | orchestrator | Sunday 01 March 2026 00:45:57 +0000 (0:00:00.745) 0:00:34.358 ********** 2026-03-01 00:45:59.252820 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252826 | orchestrator | 2026-03-01 00:45:59.252832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252838 | orchestrator | Sunday 01 March 2026 00:45:57 +0000 (0:00:00.214) 0:00:34.573 ********** 2026-03-01 00:45:59.252843 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252849 | orchestrator | 2026-03-01 00:45:59.252855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252860 | orchestrator | Sunday 01 March 2026 00:45:58 +0000 (0:00:00.189) 0:00:34.763 ********** 2026-03-01 00:45:59.252866 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-01 00:45:59.252877 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-01 00:45:59.252883 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-01 00:45:59.252888 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-01 00:45:59.252894 | orchestrator | 2026-03-01 00:45:59.252900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252905 | orchestrator | Sunday 01 March 2026 00:45:58 +0000 (0:00:00.574) 0:00:35.337 ********** 2026-03-01 00:45:59.252911 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252917 | orchestrator | 2026-03-01 00:45:59.252923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252929 | orchestrator 
| Sunday 01 March 2026 00:45:58 +0000 (0:00:00.150) 0:00:35.487 ********** 2026-03-01 00:45:59.252934 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252940 | orchestrator | 2026-03-01 00:45:59.252945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252951 | orchestrator | Sunday 01 March 2026 00:45:58 +0000 (0:00:00.148) 0:00:35.636 ********** 2026-03-01 00:45:59.252957 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252962 | orchestrator | 2026-03-01 00:45:59.252968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:45:59.252974 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.153) 0:00:35.789 ********** 2026-03-01 00:45:59.252980 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:45:59.252985 | orchestrator | 2026-03-01 00:45:59.252995 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-01 00:46:02.569650 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.145) 0:00:35.935 ********** 2026-03-01 00:46:02.569743 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-01 00:46:02.569754 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-01 00:46:02.569760 | orchestrator | 2026-03-01 00:46:02.569768 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-01 00:46:02.569775 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.131) 0:00:36.067 ********** 2026-03-01 00:46:02.569783 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.569790 | orchestrator | 2026-03-01 00:46:02.569795 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-01 00:46:02.569801 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.098) 0:00:36.165 ********** 
2026-03-01 00:46:02.569808 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.569814 | orchestrator | 2026-03-01 00:46:02.569820 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-01 00:46:02.569826 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.097) 0:00:36.262 ********** 2026-03-01 00:46:02.569833 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.569839 | orchestrator | 2026-03-01 00:46:02.569845 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-01 00:46:02.569851 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.252) 0:00:36.515 ********** 2026-03-01 00:46:02.569857 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:46:02.569863 | orchestrator | 2026-03-01 00:46:02.569868 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-01 00:46:02.569875 | orchestrator | Sunday 01 March 2026 00:45:59 +0000 (0:00:00.104) 0:00:36.619 ********** 2026-03-01 00:46:02.569881 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}}) 2026-03-01 00:46:02.569888 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1a7437a-a9c6-5afd-b028-da6f65a62b89'}}) 2026-03-01 00:46:02.569894 | orchestrator | 2026-03-01 00:46:02.569900 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-01 00:46:02.569906 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.122) 0:00:36.742 ********** 2026-03-01 00:46:02.569913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}})  2026-03-01 00:46:02.569943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1a7437a-a9c6-5afd-b028-da6f65a62b89'}})  
2026-03-01 00:46:02.569950 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.569956 | orchestrator | 2026-03-01 00:46:02.569962 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-01 00:46:02.569968 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.111) 0:00:36.854 ********** 2026-03-01 00:46:02.569974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}})  2026-03-01 00:46:02.569980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1a7437a-a9c6-5afd-b028-da6f65a62b89'}})  2026-03-01 00:46:02.569987 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.569993 | orchestrator | 2026-03-01 00:46:02.570000 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-01 00:46:02.570006 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.129) 0:00:36.983 ********** 2026-03-01 00:46:02.570012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}})  2026-03-01 00:46:02.570069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1a7437a-a9c6-5afd-b028-da6f65a62b89'}})  2026-03-01 00:46:02.570076 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570082 | orchestrator | 2026-03-01 00:46:02.570088 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-01 00:46:02.570149 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.115) 0:00:37.099 ********** 2026-03-01 00:46:02.570156 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:46:02.570162 | orchestrator | 2026-03-01 00:46:02.570168 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-01 00:46:02.570175 | 
orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.106) 0:00:37.205 ********** 2026-03-01 00:46:02.570182 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:46:02.570189 | orchestrator | 2026-03-01 00:46:02.570207 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-01 00:46:02.570222 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.102) 0:00:37.307 ********** 2026-03-01 00:46:02.570229 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570236 | orchestrator | 2026-03-01 00:46:02.570243 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-01 00:46:02.570249 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.099) 0:00:37.407 ********** 2026-03-01 00:46:02.570256 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570263 | orchestrator | 2026-03-01 00:46:02.570271 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-01 00:46:02.570282 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.120) 0:00:37.527 ********** 2026-03-01 00:46:02.570294 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570302 | orchestrator | 2026-03-01 00:46:02.570310 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-01 00:46:02.570318 | orchestrator | Sunday 01 March 2026 00:46:00 +0000 (0:00:00.131) 0:00:37.659 ********** 2026-03-01 00:46:02.570326 | orchestrator | ok: [testbed-node-5] => { 2026-03-01 00:46:02.570333 | orchestrator |  "ceph_osd_devices": { 2026-03-01 00:46:02.570340 | orchestrator |  "sdb": { 2026-03-01 00:46:02.570365 | orchestrator |  "osd_lvm_uuid": "14f5527d-3d57-5d3d-81f7-fd6f0358fc1d" 2026-03-01 00:46:02.570374 | orchestrator |  }, 2026-03-01 00:46:02.570382 | orchestrator |  "sdc": { 2026-03-01 00:46:02.570405 | orchestrator |  "osd_lvm_uuid": 
"d1a7437a-a9c6-5afd-b028-da6f65a62b89" 2026-03-01 00:46:02.570412 | orchestrator |  } 2026-03-01 00:46:02.570419 | orchestrator |  } 2026-03-01 00:46:02.570427 | orchestrator | } 2026-03-01 00:46:02.570434 | orchestrator | 2026-03-01 00:46:02.570451 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-01 00:46:02.570459 | orchestrator | Sunday 01 March 2026 00:46:01 +0000 (0:00:00.130) 0:00:37.789 ********** 2026-03-01 00:46:02.570466 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570474 | orchestrator | 2026-03-01 00:46:02.570481 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-01 00:46:02.570487 | orchestrator | Sunday 01 March 2026 00:46:01 +0000 (0:00:00.350) 0:00:38.140 ********** 2026-03-01 00:46:02.570494 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570501 | orchestrator | 2026-03-01 00:46:02.570509 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-01 00:46:02.570516 | orchestrator | Sunday 01 March 2026 00:46:01 +0000 (0:00:00.112) 0:00:38.252 ********** 2026-03-01 00:46:02.570523 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:46:02.570529 | orchestrator | 2026-03-01 00:46:02.570536 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-01 00:46:02.570543 | orchestrator | Sunday 01 March 2026 00:46:01 +0000 (0:00:00.103) 0:00:38.356 ********** 2026-03-01 00:46:02.570550 | orchestrator | changed: [testbed-node-5] => { 2026-03-01 00:46:02.570557 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-01 00:46:02.570564 | orchestrator |  "ceph_osd_devices": { 2026-03-01 00:46:02.570571 | orchestrator |  "sdb": { 2026-03-01 00:46:02.570578 | orchestrator |  "osd_lvm_uuid": "14f5527d-3d57-5d3d-81f7-fd6f0358fc1d" 2026-03-01 00:46:02.570585 | orchestrator |  }, 2026-03-01 00:46:02.570592 | 
orchestrator |  "sdc": { 2026-03-01 00:46:02.570604 | orchestrator |  "osd_lvm_uuid": "d1a7437a-a9c6-5afd-b028-da6f65a62b89" 2026-03-01 00:46:02.570611 | orchestrator |  } 2026-03-01 00:46:02.570617 | orchestrator |  }, 2026-03-01 00:46:02.570625 | orchestrator |  "lvm_volumes": [ 2026-03-01 00:46:02.570632 | orchestrator |  { 2026-03-01 00:46:02.570639 | orchestrator |  "data": "osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d", 2026-03-01 00:46:02.570647 | orchestrator |  "data_vg": "ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d" 2026-03-01 00:46:02.570653 | orchestrator |  }, 2026-03-01 00:46:02.570664 | orchestrator |  { 2026-03-01 00:46:02.570671 | orchestrator |  "data": "osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89", 2026-03-01 00:46:02.570678 | orchestrator |  "data_vg": "ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89" 2026-03-01 00:46:02.570685 | orchestrator |  } 2026-03-01 00:46:02.570692 | orchestrator |  ] 2026-03-01 00:46:02.570700 | orchestrator |  } 2026-03-01 00:46:02.570706 | orchestrator | } 2026-03-01 00:46:02.570713 | orchestrator | 2026-03-01 00:46:02.570720 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-01 00:46:02.570727 | orchestrator | Sunday 01 March 2026 00:46:01 +0000 (0:00:00.167) 0:00:38.524 ********** 2026-03-01 00:46:02.570734 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-01 00:46:02.570741 | orchestrator | 2026-03-01 00:46:02.570748 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:46:02.570756 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 00:46:02.570764 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 00:46:02.570771 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 
00:46:02.570778 | orchestrator | 2026-03-01 00:46:02.570785 | orchestrator | 2026-03-01 00:46:02.570792 | orchestrator | 2026-03-01 00:46:02.570799 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:46:02.570807 | orchestrator | Sunday 01 March 2026 00:46:02 +0000 (0:00:00.717) 0:00:39.242 ********** 2026-03-01 00:46:02.570819 | orchestrator | =============================================================================== 2026-03-01 00:46:02.570826 | orchestrator | Write configuration file ------------------------------------------------ 3.55s 2026-03-01 00:46:02.570834 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2026-03-01 00:46:02.570838 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s 2026-03-01 00:46:02.570841 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.02s 2026-03-01 00:46:02.570845 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-01 00:46:02.570849 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2026-03-01 00:46:02.570853 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-03-01 00:46:02.570856 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-03-01 00:46:02.570860 | orchestrator | Print configuration data ------------------------------------------------ 0.72s 2026-03-01 00:46:02.570864 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-01 00:46:02.570868 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-03-01 00:46:02.570871 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-01 
00:46:02.570875 | orchestrator | Print WAL devices ------------------------------------------------------- 0.63s 2026-03-01 00:46:02.570884 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-03-01 00:46:02.836656 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.60s 2026-03-01 00:46:02.836748 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2026-03-01 00:46:02.836756 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-03-01 00:46:02.836762 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2026-03-01 00:46:02.836768 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.49s 2026-03-01 00:46:02.836774 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.48s 2026-03-01 00:46:25.585451 | orchestrator | 2026-03-01 00:46:25 | INFO  | Task 15de42c9-9a2c-4133-ab0c-5ec1206ef6c4 (sync inventory) is running in background. Output coming soon. 
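For reference, the block-only `lvm_volumes` structure printed by the "Print configuration data" task above follows directly from `ceph_osd_devices`: each device's `osd_lvm_uuid` becomes an `osd-block-<uuid>` logical volume in a `ceph-<uuid>` volume group. A minimal sketch of that mapping (values taken from the log for testbed-node-5; this is an illustration, not the playbook's actual Jinja templating):

```python
# Sketch: derive the lvm_volumes list shown in the log output from
# ceph_osd_devices. Hypothetical helper code, not OSISM source.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "14f5527d-3d57-5d3d-81f7-fd6f0358fc1d"},
    "sdc": {"osd_lvm_uuid": "d1a7437a-a9c6-5afd-b028-da6f65a62b89"},
}

# Each OSD gets a data LV named osd-block-<uuid> inside VG ceph-<uuid>,
# matching the "Generate lvm_volumes structure (block only)" task.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]
```

This reproduces exactly the two entries printed under `_ceph_configure_lvm_config_data.lvm_volumes` in the task output above.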
2026-03-01 00:46:51.460855 | orchestrator | 2026-03-01 00:46:27 | INFO  | Starting group_vars file reorganization
2026-03-01 00:46:51.460951 | orchestrator | 2026-03-01 00:46:27 | INFO  | Moved 0 file(s) to their respective directories
2026-03-01 00:46:51.460968 | orchestrator | 2026-03-01 00:46:27 | INFO  | Group_vars file reorganization completed
2026-03-01 00:46:51.460980 | orchestrator | 2026-03-01 00:46:30 | INFO  | Starting variable preparation from inventory
2026-03-01 00:46:51.460991 | orchestrator | 2026-03-01 00:46:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-01 00:46:51.461002 | orchestrator | 2026-03-01 00:46:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-01 00:46:51.461013 | orchestrator | 2026-03-01 00:46:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-01 00:46:51.461023 | orchestrator | 2026-03-01 00:46:32 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-01 00:46:51.461034 | orchestrator | 2026-03-01 00:46:32 | INFO  | Variable preparation completed
2026-03-01 00:46:51.461044 | orchestrator | 2026-03-01 00:46:34 | INFO  | Starting inventory overwrite handling
2026-03-01 00:46:51.461055 | orchestrator | 2026-03-01 00:46:34 | INFO  | Handling group overwrites in 99-overwrite
2026-03-01 00:46:51.461065 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removing group frr:children from 60-generic
2026-03-01 00:46:51.461100 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-01 00:46:51.461110 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-01 00:46:51.461137 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-01 00:46:51.461161 | orchestrator | 2026-03-01 00:46:34 | INFO  | Handling group overwrites in 20-roles
2026-03-01 00:46:51.461171 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-01 00:46:51.461182 | orchestrator | 2026-03-01 00:46:34 | INFO  | Removed 5 group(s) in total
2026-03-01 00:46:51.461192 | orchestrator | 2026-03-01 00:46:34 | INFO  | Inventory overwrite handling completed
2026-03-01 00:46:51.461220 | orchestrator | 2026-03-01 00:46:35 | INFO  | Starting merge of inventory files
2026-03-01 00:46:51.461230 | orchestrator | 2026-03-01 00:46:35 | INFO  | Inventory files merged successfully
2026-03-01 00:46:51.461241 | orchestrator | 2026-03-01 00:46:40 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-01 00:46:51.461251 | orchestrator | 2026-03-01 00:46:50 | INFO  | Successfully wrote ClusterShell configuration
2026-03-01 00:46:51.461262 | orchestrator | [master c1914ca] 2026-03-01-00-46
2026-03-01 00:46:51.461274 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-01 00:46:53.243112 | orchestrator | 2026-03-01 00:46:53 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-01 00:46:53.286478 | orchestrator | 2026-03-01 00:46:53 | INFO  | Task 9722be36-2f92-47f5-a5a4-008e24dc9af5 (ceph-create-lvm-devices) was prepared for execution.
2026-03-01 00:46:53.286568 | orchestrator | 2026-03-01 00:46:53 | INFO  | It takes a moment until task 9722be36-2f92-47f5-a5a4-008e24dc9af5 (ceph-create-lvm-devices) has been started and output is visible here.
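The variable-preparation step above derives a few variables from the merged inventory and writes each to a numbered group_vars file (e.g. 050-kolla-ceph-rgw-hosts.yml holding ceph_rgw_hosts). A minimal, hypothetical sketch of that write step — the file and variable names come from the log, but the helper and its layout are assumptions, not OSISM's actual implementation:

```python
# Hedged sketch: write one inventory-derived variable as a small YAML
# document, the way files such as 050-kolla-ceph-rgw-hosts.yml show up
# in the log. Helper name and layout are illustrative assumptions.
import tempfile
from pathlib import Path

def write_group_var(directory: Path, filename: str, key: str, hosts: list) -> Path:
    # Emit a minimal YAML list without external dependencies.
    lines = [f"{key}:"] + [f"  - {host}" for host in hosts]
    path = directory / filename
    path.write_text("\n".join(lines) + "\n")
    return path

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        out = write_group_var(Path(tmp), "050-kolla-ceph-rgw-hosts.yml",
                              "ceph_rgw_hosts",
                              ["testbed-node-0", "testbed-node-1", "testbed-node-2"])
        print(out.read_text())
```

The numeric prefix (050-) is what lets later files override earlier ones when the inventory is merged, which is also why the overwrite-handling step above removes duplicate groups from lower-numbered layers first.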
2026-03-01 00:47:03.922173 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-01 00:47:03.922358 | orchestrator | 2.16.14
2026-03-01 00:47:03.922379 | orchestrator |
2026-03-01 00:47:03.922392 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-01 00:47:03.922405 | orchestrator |
2026-03-01 00:47:03.922416 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-01 00:47:03.922428 | orchestrator | Sunday 01 March 2026 00:46:57 +0000 (0:00:00.276) 0:00:00.276 **********
2026-03-01 00:47:03.922440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-01 00:47:03.922452 | orchestrator |
2026-03-01 00:47:03.922463 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-01 00:47:03.922474 | orchestrator | Sunday 01 March 2026 00:46:57 +0000 (0:00:00.234) 0:00:00.510 **********
2026-03-01 00:47:03.922485 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:47:03.922496 | orchestrator |
2026-03-01 00:47:03.922507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:03.922518 | orchestrator | Sunday 01 March 2026 00:46:57 +0000 (0:00:00.205) 0:00:00.716 **********
2026-03-01 00:47:03.922529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-01 00:47:03.922540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-01 00:47:03.922551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-01 00:47:03.922562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-01 00:47:03.922573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-01
00:47:03.922584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-01 00:47:03.922596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-01 00:47:03.922636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-01 00:47:03.922650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-01 00:47:03.922664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-01 00:47:03.922678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-01 00:47:03.922691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-01 00:47:03.922719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-01 00:47:03.922732 | orchestrator | 2026-03-01 00:47:03.922745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.922757 | orchestrator | Sunday 01 March 2026 00:46:58 +0000 (0:00:00.449) 0:00:01.165 ********** 2026-03-01 00:47:03.922770 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.922783 | orchestrator | 2026-03-01 00:47:03.922796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.922809 | orchestrator | Sunday 01 March 2026 00:46:58 +0000 (0:00:00.180) 0:00:01.345 ********** 2026-03-01 00:47:03.922821 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.922834 | orchestrator | 2026-03-01 00:47:03.922847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.922860 | orchestrator | Sunday 01 March 2026 00:46:58 +0000 (0:00:00.202) 0:00:01.548 ********** 2026-03-01 
00:47:03.922873 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.922885 | orchestrator | 2026-03-01 00:47:03.922898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.922912 | orchestrator | Sunday 01 March 2026 00:46:58 +0000 (0:00:00.213) 0:00:01.762 ********** 2026-03-01 00:47:03.922925 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.922938 | orchestrator | 2026-03-01 00:47:03.922954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.922973 | orchestrator | Sunday 01 March 2026 00:46:58 +0000 (0:00:00.204) 0:00:01.967 ********** 2026-03-01 00:47:03.922991 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923010 | orchestrator | 2026-03-01 00:47:03.923031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923050 | orchestrator | Sunday 01 March 2026 00:46:59 +0000 (0:00:00.189) 0:00:02.157 ********** 2026-03-01 00:47:03.923067 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923087 | orchestrator | 2026-03-01 00:47:03.923099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923109 | orchestrator | Sunday 01 March 2026 00:46:59 +0000 (0:00:00.230) 0:00:02.387 ********** 2026-03-01 00:47:03.923120 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923130 | orchestrator | 2026-03-01 00:47:03.923141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923152 | orchestrator | Sunday 01 March 2026 00:46:59 +0000 (0:00:00.197) 0:00:02.584 ********** 2026-03-01 00:47:03.923162 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923174 | orchestrator | 2026-03-01 00:47:03.923185 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-01 00:47:03.923195 | orchestrator | Sunday 01 March 2026 00:46:59 +0000 (0:00:00.173) 0:00:02.757 ********** 2026-03-01 00:47:03.923206 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6) 2026-03-01 00:47:03.923219 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6) 2026-03-01 00:47:03.923265 | orchestrator | 2026-03-01 00:47:03.923287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923330 | orchestrator | Sunday 01 March 2026 00:47:00 +0000 (0:00:00.374) 0:00:03.131 ********** 2026-03-01 00:47:03.923367 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0) 2026-03-01 00:47:03.923379 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0) 2026-03-01 00:47:03.923390 | orchestrator | 2026-03-01 00:47:03.923401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923412 | orchestrator | Sunday 01 March 2026 00:47:00 +0000 (0:00:00.572) 0:00:03.704 ********** 2026-03-01 00:47:03.923423 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec) 2026-03-01 00:47:03.923434 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec) 2026-03-01 00:47:03.923445 | orchestrator | 2026-03-01 00:47:03.923455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923466 | orchestrator | Sunday 01 March 2026 00:47:01 +0000 (0:00:00.531) 0:00:04.235 ********** 2026-03-01 00:47:03.923477 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17) 2026-03-01 00:47:03.923488 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17) 2026-03-01 00:47:03.923499 | orchestrator | 2026-03-01 00:47:03.923509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:03.923520 | orchestrator | Sunday 01 March 2026 00:47:01 +0000 (0:00:00.672) 0:00:04.908 ********** 2026-03-01 00:47:03.923530 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-01 00:47:03.923541 | orchestrator | 2026-03-01 00:47:03.923552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923563 | orchestrator | Sunday 01 March 2026 00:47:02 +0000 (0:00:00.340) 0:00:05.248 ********** 2026-03-01 00:47:03.923573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-01 00:47:03.923584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-01 00:47:03.923595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-01 00:47:03.923605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-01 00:47:03.923616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-01 00:47:03.923627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-01 00:47:03.923638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-01 00:47:03.923649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-01 00:47:03.923659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-01 00:47:03.923670 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-01 00:47:03.923681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-01 00:47:03.923691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-01 00:47:03.923702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-01 00:47:03.923713 | orchestrator | 2026-03-01 00:47:03.923724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923735 | orchestrator | Sunday 01 March 2026 00:47:02 +0000 (0:00:00.382) 0:00:05.630 ********** 2026-03-01 00:47:03.923745 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923756 | orchestrator | 2026-03-01 00:47:03.923767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923778 | orchestrator | Sunday 01 March 2026 00:47:02 +0000 (0:00:00.198) 0:00:05.829 ********** 2026-03-01 00:47:03.923796 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923807 | orchestrator | 2026-03-01 00:47:03.923818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923828 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.194) 0:00:06.023 ********** 2026-03-01 00:47:03.923839 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923850 | orchestrator | 2026-03-01 00:47:03.923861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923871 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.171) 0:00:06.194 ********** 2026-03-01 00:47:03.923882 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923893 | orchestrator | 2026-03-01 00:47:03.923904 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923914 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.185) 0:00:06.380 ********** 2026-03-01 00:47:03.923925 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923936 | orchestrator | 2026-03-01 00:47:03.923946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.923965 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.177) 0:00:06.558 ********** 2026-03-01 00:47:03.923977 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.923989 | orchestrator | 2026-03-01 00:47:03.924007 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:03.924027 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.184) 0:00:06.742 ********** 2026-03-01 00:47:03.924045 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:03.924065 | orchestrator | 2026-03-01 00:47:03.924127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828615 | orchestrator | Sunday 01 March 2026 00:47:03 +0000 (0:00:00.195) 0:00:06.938 ********** 2026-03-01 00:47:11.828696 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828704 | orchestrator | 2026-03-01 00:47:11.828710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828718 | orchestrator | Sunday 01 March 2026 00:47:04 +0000 (0:00:00.191) 0:00:07.130 ********** 2026-03-01 00:47:11.828726 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-01 00:47:11.828734 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-01 00:47:11.828742 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-01 00:47:11.828750 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-01 00:47:11.828757 | orchestrator | 2026-03-01 
00:47:11.828766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828773 | orchestrator | Sunday 01 March 2026 00:47:04 +0000 (0:00:00.888) 0:00:08.018 ********** 2026-03-01 00:47:11.828781 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828788 | orchestrator | 2026-03-01 00:47:11.828796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828804 | orchestrator | Sunday 01 March 2026 00:47:05 +0000 (0:00:00.184) 0:00:08.202 ********** 2026-03-01 00:47:11.828811 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828819 | orchestrator | 2026-03-01 00:47:11.828828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828836 | orchestrator | Sunday 01 March 2026 00:47:05 +0000 (0:00:00.179) 0:00:08.382 ********** 2026-03-01 00:47:11.828844 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828852 | orchestrator | 2026-03-01 00:47:11.828859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-01 00:47:11.828867 | orchestrator | Sunday 01 March 2026 00:47:05 +0000 (0:00:00.178) 0:00:08.561 ********** 2026-03-01 00:47:11.828875 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828883 | orchestrator | 2026-03-01 00:47:11.828891 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-01 00:47:11.828900 | orchestrator | Sunday 01 March 2026 00:47:05 +0000 (0:00:00.183) 0:00:08.744 ********** 2026-03-01 00:47:11.828909 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.828945 | orchestrator | 2026-03-01 00:47:11.828954 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-01 00:47:11.828962 | orchestrator | Sunday 01 March 2026 00:47:05 +0000 (0:00:00.129) 
0:00:08.873 **********
2026-03-01 00:47:11.828971 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31f22992-0e1a-5ef5-a8b3-14a12910c272'}})
2026-03-01 00:47:11.828980 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}})
2026-03-01 00:47:11.828987 | orchestrator |
2026-03-01 00:47:11.829009 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-01 00:47:11.829018 | orchestrator | Sunday 01 March 2026 00:47:06 +0000 (0:00:00.178) 0:00:09.052 **********
2026-03-01 00:47:11.829027 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:11.829036 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:11.829043 | orchestrator |
2026-03-01 00:47:11.829051 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-01 00:47:11.829058 | orchestrator | Sunday 01 March 2026 00:47:08 +0000 (0:00:02.146) 0:00:11.199 **********
2026-03-01 00:47:11.829066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:11.829075 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:11.829082 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:11.829089 | orchestrator |
2026-03-01 00:47:11.829097 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-01 00:47:11.829104 | orchestrator | Sunday 01 March 2026 00:47:08 +0000 (0:00:00.175) 0:00:11.374 **********
2026-03-01 00:47:11.829112 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:11.829119 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:11.829127 | orchestrator |
2026-03-01 00:47:11.829134 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-01 00:47:11.829142 | orchestrator | Sunday 01 March 2026 00:47:09 +0000 (0:00:01.508) 0:00:12.883 **********
2026-03-01 00:47:11.829149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:11.829156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:11.829164 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:11.829171 | orchestrator |
2026-03-01 00:47:11.829180 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-01 00:47:11.829187 | orchestrator | Sunday 01 March 2026 00:47:10 +0000 (0:00:00.155) 0:00:13.038 **********
2026-03-01 00:47:11.829214 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:11.829223 | orchestrator |
2026-03-01 00:47:11.829231 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-01 00:47:11.829274 | orchestrator | Sunday 01 March 2026 00:47:10 +0000 (0:00:00.136) 0:00:13.175 **********
2026-03-01 00:47:11.829280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg':
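The loop items in the "Create block VGs" / "Create block LVs" tasks follow a fixed naming pattern: each entry in ceph_osd_devices carries an osd_lvm_uuid, from which the volume group name ceph-&lt;uuid&gt; and the logical volume name osd-block-&lt;uuid&gt; are derived. A small sketch of that mapping, using the UUIDs from this very log (the helper function is illustrative, not the playbook's actual code):

```python
# Sketch of the naming convention visible in the log: VG and LV names are
# derived deterministically from each device's osd_lvm_uuid.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "31f22992-0e1a-5ef5-a8b3-14a12910c272"},
    "sdc": {"osd_lvm_uuid": "71bbeaa0-80e8-52b0-b7ca-02965d05b7d3"},
}

def lvm_volume_entries(devices):
    # One {'data': ..., 'data_vg': ...} item per OSD device, matching the
    # loop items printed by the "Create block VGs"/"Create block LVs" tasks.
    return [
        {"data": f"osd-block-{spec['osd_lvm_uuid']}",
         "data_vg": f"ceph-{spec['osd_lvm_uuid']}"}
        for spec in devices.values()
    ]

for entry in lvm_volume_entries(ceph_osd_devices):
    print(entry["data_vg"], "->", entry["data"])
```

Keeping the UUID in both names is what lets later runs, and ceph-volume itself, correlate a block LV with its VG without any extra state.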
'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829298 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829304 | orchestrator | 2026-03-01 00:47:11.829309 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-01 00:47:11.829315 | orchestrator | Sunday 01 March 2026 00:47:10 +0000 (0:00:00.298) 0:00:13.474 ********** 2026-03-01 00:47:11.829320 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829325 | orchestrator | 2026-03-01 00:47:11.829330 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-01 00:47:11.829335 | orchestrator | Sunday 01 March 2026 00:47:10 +0000 (0:00:00.118) 0:00:13.592 ********** 2026-03-01 00:47:11.829340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829351 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829357 | orchestrator | 2026-03-01 00:47:11.829362 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-01 00:47:11.829367 | orchestrator | Sunday 01 March 2026 00:47:10 +0000 (0:00:00.171) 0:00:13.764 ********** 2026-03-01 00:47:11.829372 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829377 | orchestrator | 2026-03-01 00:47:11.829383 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-01 00:47:11.829388 | orchestrator | Sunday 
01 March 2026 00:47:10 +0000 (0:00:00.138) 0:00:13.903 ********** 2026-03-01 00:47:11.829393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829404 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829409 | orchestrator | 2026-03-01 00:47:11.829414 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-01 00:47:11.829420 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.182) 0:00:14.085 ********** 2026-03-01 00:47:11.829425 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:47:11.829431 | orchestrator | 2026-03-01 00:47:11.829437 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-01 00:47:11.829442 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.147) 0:00:14.232 ********** 2026-03-01 00:47:11.829447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829453 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829458 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829463 | orchestrator | 2026-03-01 00:47:11.829469 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-01 00:47:11.829474 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.157) 0:00:14.390 ********** 2026-03-01 00:47:11.829479 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829490 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829495 | orchestrator | 2026-03-01 00:47:11.829500 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-01 00:47:11.829509 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.153) 0:00:14.544 ********** 2026-03-01 00:47:11.829514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})  2026-03-01 00:47:11.829519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})  2026-03-01 00:47:11.829523 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829528 | orchestrator | 2026-03-01 00:47:11.829532 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-01 00:47:11.829537 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.158) 0:00:14.702 ********** 2026-03-01 00:47:11.829541 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:11.829546 | orchestrator | 2026-03-01 00:47:11.829551 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-01 00:47:11.829560 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.143) 0:00:14.846 ********** 2026-03-01 00:47:17.999773 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:17.999851 | orchestrator | 2026-03-01 00:47:17.999858 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-01 00:47:17.999864 | orchestrator | Sunday 01 March 2026 00:47:11 +0000 (0:00:00.136) 0:00:14.982 ********** 2026-03-01 00:47:17.999869 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:17.999873 | orchestrator | 2026-03-01 00:47:17.999896 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-01 00:47:17.999901 | orchestrator | Sunday 01 March 2026 00:47:12 +0000 (0:00:00.135) 0:00:15.118 ********** 2026-03-01 00:47:17.999906 | orchestrator | ok: [testbed-node-3] => { 2026-03-01 00:47:17.999910 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-01 00:47:17.999914 | orchestrator | } 2026-03-01 00:47:17.999919 | orchestrator | 2026-03-01 00:47:17.999936 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-01 00:47:17.999940 | orchestrator | Sunday 01 March 2026 00:47:12 +0000 (0:00:00.324) 0:00:15.442 ********** 2026-03-01 00:47:17.999944 | orchestrator | ok: [testbed-node-3] => { 2026-03-01 00:47:17.999949 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-01 00:47:17.999953 | orchestrator | } 2026-03-01 00:47:17.999956 | orchestrator | 2026-03-01 00:47:17.999960 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-01 00:47:17.999964 | orchestrator | Sunday 01 March 2026 00:47:12 +0000 (0:00:00.140) 0:00:15.583 ********** 2026-03-01 00:47:17.999968 | orchestrator | ok: [testbed-node-3] => { 2026-03-01 00:47:17.999972 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-01 00:47:17.999976 | orchestrator | } 2026-03-01 00:47:17.999980 | orchestrator | 2026-03-01 00:47:17.999984 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-01 00:47:17.999988 | orchestrator | Sunday 01 March 2026 00:47:12 +0000 (0:00:00.125) 0:00:15.708 ********** 2026-03-01 00:47:17.999991 | orchestrator | ok: 
[testbed-node-3] 2026-03-01 00:47:17.999995 | orchestrator | 2026-03-01 00:47:18.000006 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-01 00:47:18.000010 | orchestrator | Sunday 01 March 2026 00:47:13 +0000 (0:00:00.659) 0:00:16.368 ********** 2026-03-01 00:47:18.000019 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:47:18.000023 | orchestrator | 2026-03-01 00:47:18.000027 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-01 00:47:18.000031 | orchestrator | Sunday 01 March 2026 00:47:13 +0000 (0:00:00.509) 0:00:16.877 ********** 2026-03-01 00:47:18.000035 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:47:18.000038 | orchestrator | 2026-03-01 00:47:18.000042 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-01 00:47:18.000046 | orchestrator | Sunday 01 March 2026 00:47:14 +0000 (0:00:00.539) 0:00:17.417 ********** 2026-03-01 00:47:18.000050 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:47:18.000053 | orchestrator | 2026-03-01 00:47:18.000073 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-01 00:47:18.000077 | orchestrator | Sunday 01 March 2026 00:47:14 +0000 (0:00:00.156) 0:00:17.573 ********** 2026-03-01 00:47:18.000081 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:18.000085 | orchestrator | 2026-03-01 00:47:18.000089 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-01 00:47:18.000092 | orchestrator | Sunday 01 March 2026 00:47:14 +0000 (0:00:00.137) 0:00:17.710 ********** 2026-03-01 00:47:18.000096 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:47:18.000100 | orchestrator | 2026-03-01 00:47:18.000104 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-01 00:47:18.000107 | orchestrator | 
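The "Gather ... VGs with total and available size in bytes" tasks query LVM and the "Combine JSON" task merges the three reports into one vgs_report structure (printed below as an empty {"vg": []} on this node, since it has no DB/WAL devices). A hedged sketch of that combination, assuming the JSON shape produced by `vgs --reportformat json`; the merge helper is an assumption, not the playbook's code:

```python
# Hedged sketch: merge several `vgs --reportformat json`-style outputs into
# a single report, as the "Combine JSON from _db/wal/db_wal_vgs_cmd_output"
# task suggests. The helper is illustrative only.
import json

def combine_vg_reports(*raw_outputs):
    vgs = []
    for raw in raw_outputs:
        data = json.loads(raw)
        # LVM wraps results as {"report": [{"vg": [...]}]}.
        for report in data.get("report", []):
            vgs.extend(report.get("vg", []))
    return {"vg": vgs}

db_out = '{"report": [{"vg": []}]}'      # no ceph_db_devices on this node
wal_out = '{"report": [{"vg": []}]}'     # no ceph_wal_devices either
db_wal_out = '{"report": [{"vg": []}]}'  # and no ceph_db_wal_devices

print(combine_vg_reports(db_out, wal_out, db_wal_out))
```

With all three reports empty, every downstream size calculation and "Fail if ... > available" check is skipped, which is exactly what the subsequent tasks in this log show.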
Sunday 01 March 2026 00:47:14 +0000 (0:00:00.106) 0:00:17.817 **********
2026-03-01 00:47:18.000111 | orchestrator | ok: [testbed-node-3] => {
2026-03-01 00:47:18.000174 | orchestrator |  "vgs_report": {
2026-03-01 00:47:18.000179 | orchestrator |  "vg": []
2026-03-01 00:47:18.000183 | orchestrator |  }
2026-03-01 00:47:18.000187 | orchestrator | }
2026-03-01 00:47:18.000191 | orchestrator |
2026-03-01 00:47:18.000194 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-01 00:47:18.000198 | orchestrator | Sunday 01 March 2026 00:47:14 +0000 (0:00:00.164) 0:00:17.981 **********
2026-03-01 00:47:18.000202 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000205 | orchestrator |
2026-03-01 00:47:18.000209 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-01 00:47:18.000213 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.132) 0:00:18.114 **********
2026-03-01 00:47:18.000217 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000220 | orchestrator |
2026-03-01 00:47:18.000224 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-01 00:47:18.000228 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.129) 0:00:18.243 **********
2026-03-01 00:47:18.000232 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000236 | orchestrator |
2026-03-01 00:47:18.000244 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-01 00:47:18.000291 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.291) 0:00:18.535 **********
2026-03-01 00:47:18.000296 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000300 | orchestrator |
2026-03-01 00:47:18.000305 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-01 00:47:18.000310 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.124) 0:00:18.660 **********
2026-03-01 00:47:18.000314 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000319 | orchestrator |
2026-03-01 00:47:18.000323 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-01 00:47:18.000328 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.113) 0:00:18.773 **********
2026-03-01 00:47:18.000333 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000337 | orchestrator |
2026-03-01 00:47:18.000342 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-01 00:47:18.000346 | orchestrator | Sunday 01 March 2026 00:47:15 +0000 (0:00:00.134) 0:00:18.908 **********
2026-03-01 00:47:18.000351 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000355 | orchestrator |
2026-03-01 00:47:18.000360 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-01 00:47:18.000364 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.121) 0:00:19.030 **********
2026-03-01 00:47:18.000379 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000384 | orchestrator |
2026-03-01 00:47:18.000389 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-01 00:47:18.000393 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.118) 0:00:19.148 **********
2026-03-01 00:47:18.000398 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000402 | orchestrator |
2026-03-01 00:47:18.000407 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-01 00:47:18.000417 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.137) 0:00:19.285 **********
2026-03-01 00:47:18.000421 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000425 | orchestrator |
2026-03-01 00:47:18.000430 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-01 00:47:18.000434 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.136) 0:00:19.422 **********
2026-03-01 00:47:18.000439 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000443 | orchestrator |
2026-03-01 00:47:18.000448 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-01 00:47:18.000452 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.167) 0:00:19.589 **********
2026-03-01 00:47:18.000456 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000461 | orchestrator |
2026-03-01 00:47:18.000465 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-01 00:47:18.000470 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.147) 0:00:19.736 **********
2026-03-01 00:47:18.000474 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000478 | orchestrator |
2026-03-01 00:47:18.000483 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-01 00:47:18.000487 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.140) 0:00:19.877 **********
2026-03-01 00:47:18.000492 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000496 | orchestrator |
2026-03-01 00:47:18.000501 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-01 00:47:18.000506 | orchestrator | Sunday 01 March 2026 00:47:16 +0000 (0:00:00.128) 0:00:20.006 **********
2026-03-01 00:47:18.000512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:18.000518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:18.000523 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000527 | orchestrator |
2026-03-01 00:47:18.000532 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-01 00:47:18.000539 | orchestrator | Sunday 01 March 2026 00:47:17 +0000 (0:00:00.306) 0:00:20.313 **********
2026-03-01 00:47:18.000543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:18.000546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:18.000550 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000554 | orchestrator |
2026-03-01 00:47:18.000558 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-01 00:47:18.000562 | orchestrator | Sunday 01 March 2026 00:47:17 +0000 (0:00:00.173) 0:00:20.487 **********
2026-03-01 00:47:18.000565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:18.000569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:18.000573 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000577 | orchestrator |
2026-03-01 00:47:18.000580 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-01 00:47:18.000584 | orchestrator | Sunday 01 March 2026 00:47:17 +0000 (0:00:00.152) 0:00:20.639 **********
2026-03-01 00:47:18.000588 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:18.000592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:18.000600 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000604 | orchestrator |
2026-03-01 00:47:18.000607 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-01 00:47:18.000611 | orchestrator | Sunday 01 March 2026 00:47:17 +0000 (0:00:00.148) 0:00:20.787 **********
2026-03-01 00:47:18.000615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:18.000619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:18.000622 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:18.000626 | orchestrator |
2026-03-01 00:47:18.000630 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-01 00:47:18.000634 | orchestrator | Sunday 01 March 2026 00:47:17 +0000 (0:00:00.165) 0:00:20.953 **********
2026-03-01 00:47:18.000640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906565 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.906584 | orchestrator |
2026-03-01 00:47:22.906598 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-01 00:47:22.906613 | orchestrator | Sunday 01 March 2026 00:47:18 +0000 (0:00:00.162) 0:00:21.115 **********
2026-03-01 00:47:22.906627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906651 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.906664 | orchestrator |
2026-03-01 00:47:22.906675 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-01 00:47:22.906687 | orchestrator | Sunday 01 March 2026 00:47:18 +0000 (0:00:00.166) 0:00:21.282 **********
2026-03-01 00:47:22.906699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906716 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.906723 | orchestrator |
2026-03-01 00:47:22.906730 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-01 00:47:22.906737 | orchestrator | Sunday 01 March 2026 00:47:18 +0000 (0:00:00.129) 0:00:21.411 **********
2026-03-01 00:47:22.906743 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:47:22.906751 | orchestrator |
2026-03-01 00:47:22.906758 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-01 00:47:22.906764 | orchestrator | Sunday 01 March 2026 00:47:18 +0000 (0:00:00.501) 0:00:21.913 **********
2026-03-01 00:47:22.906771 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:47:22.906777 | orchestrator |
2026-03-01 00:47:22.906784 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-01 00:47:22.906791 | orchestrator | Sunday 01 March 2026 00:47:19 +0000 (0:00:00.502) 0:00:22.415 **********
2026-03-01 00:47:22.906797 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:47:22.906804 | orchestrator |
2026-03-01 00:47:22.906811 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-01 00:47:22.906818 | orchestrator | Sunday 01 March 2026 00:47:19 +0000 (0:00:00.138) 0:00:22.553 **********
2026-03-01 00:47:22.906849 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'vg_name': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906858 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'vg_name': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906865 | orchestrator |
2026-03-01 00:47:22.906872 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-01 00:47:22.906878 | orchestrator | Sunday 01 March 2026 00:47:19 +0000 (0:00:00.160) 0:00:22.713 **********
2026-03-01 00:47:22.906899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906913 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.906922 | orchestrator |
2026-03-01 00:47:22.906930 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-01 00:47:22.906938 | orchestrator | Sunday 01 March 2026 00:47:20 +0000 (0:00:00.316) 0:00:23.030 **********
2026-03-01 00:47:22.906945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.906954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.906961 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.906969 | orchestrator |
2026-03-01 00:47:22.906976 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-01 00:47:22.906985 | orchestrator | Sunday 01 March 2026 00:47:20 +0000 (0:00:00.187) 0:00:23.217 **********
2026-03-01 00:47:22.906992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'})
2026-03-01 00:47:22.907000 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'})
2026-03-01 00:47:22.907007 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:47:22.907013 | orchestrator |
2026-03-01 00:47:22.907020 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-01 00:47:22.907027 | orchestrator | Sunday 01 March 2026 00:47:20 +0000 (0:00:00.146) 0:00:23.364 **********
2026-03-01 00:47:22.907049 | orchestrator | ok: [testbed-node-3] => {
2026-03-01 00:47:22.907057 | orchestrator |  "lvm_report": {
2026-03-01 00:47:22.907064 | orchestrator |  "lv": [
2026-03-01 00:47:22.907071 | orchestrator |  {
2026-03-01 00:47:22.907078 | orchestrator |  "lv_name": "osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272",
2026-03-01 00:47:22.907085 | orchestrator |  "vg_name": "ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272"
2026-03-01 00:47:22.907092 | orchestrator |  },
2026-03-01 00:47:22.907099 | orchestrator |  {
2026-03-01 00:47:22.907106 | orchestrator |  "lv_name": "osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3",
2026-03-01 00:47:22.907112 | orchestrator |  "vg_name": "ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3"
2026-03-01 00:47:22.907119 | orchestrator |  }
2026-03-01 00:47:22.907125 | orchestrator |  ],
2026-03-01 00:47:22.907132 | orchestrator |  "pv": [
2026-03-01 00:47:22.907138 | orchestrator |  {
2026-03-01 00:47:22.907145 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-01 00:47:22.907151 | orchestrator |  "vg_name": "ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272"
2026-03-01 00:47:22.907158 | orchestrator |  },
2026-03-01 00:47:22.907165 | orchestrator |  {
2026-03-01 00:47:22.907178 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-01 00:47:22.907185 | orchestrator |  "vg_name": "ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3"
2026-03-01 00:47:22.907191 | orchestrator |  }
2026-03-01 00:47:22.907198 | orchestrator |  ]
2026-03-01 00:47:22.907205 | orchestrator |  }
2026-03-01 00:47:22.907211 | orchestrator | }
2026-03-01 00:47:22.907218 | orchestrator |
2026-03-01 00:47:22.907225 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-01 00:47:22.907232 | orchestrator |
2026-03-01 00:47:22.907238 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-01 00:47:22.907245 | orchestrator | Sunday 01 March 2026 00:47:20 +0000 (0:00:00.273) 0:00:23.638 **********
2026-03-01 00:47:22.907251 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-01 00:47:22.907285 | orchestrator |
2026-03-01 00:47:22.907297 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-01 00:47:22.907306 | orchestrator | Sunday 01 March 2026 00:47:20 +0000 (0:00:00.233) 0:00:23.871 **********
2026-03-01 00:47:22.907317 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:47:22.907326 | orchestrator |
2026-03-01 00:47:22.907336 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907346 | orchestrator | Sunday 01 March 2026 00:47:21 +0000 (0:00:00.218) 0:00:24.089 **********
2026-03-01 00:47:22.907361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-01 00:47:22.907371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-01 00:47:22.907380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-01 00:47:22.907389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-01 00:47:22.907399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-01 00:47:22.907409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-01 00:47:22.907419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-01 00:47:22.907431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-01 00:47:22.907441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-01 00:47:22.907452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-01 00:47:22.907464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-01 00:47:22.907475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-01 00:47:22.907487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-01 00:47:22.907494 | orchestrator |
2026-03-01 00:47:22.907501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907507 | orchestrator | Sunday 01 March 2026 00:47:21 +0000 (0:00:00.382) 0:00:24.472 **********
2026-03-01 00:47:22.907514 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907521 | orchestrator |
2026-03-01 00:47:22.907527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907534 | orchestrator | Sunday 01 March 2026 00:47:21 +0000 (0:00:00.184) 0:00:24.657 **********
2026-03-01 00:47:22.907541 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907547 | orchestrator |
2026-03-01 00:47:22.907554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907561 | orchestrator | Sunday 01 March 2026 00:47:21 +0000 (0:00:00.185) 0:00:24.842 **********
2026-03-01 00:47:22.907567 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907574 | orchestrator |
2026-03-01 00:47:22.907581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907595 | orchestrator | Sunday 01 March 2026 00:47:22 +0000 (0:00:00.484) 0:00:25.326 **********
2026-03-01 00:47:22.907602 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907609 | orchestrator |
2026-03-01 00:47:22.907616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907622 | orchestrator | Sunday 01 March 2026 00:47:22 +0000 (0:00:00.196) 0:00:25.523 **********
2026-03-01 00:47:22.907629 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907640 | orchestrator |
2026-03-01 00:47:22.907651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:22.907669 | orchestrator | Sunday 01 March 2026 00:47:22 +0000 (0:00:00.201) 0:00:25.724 **********
2026-03-01 00:47:22.907681 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:22.907692 | orchestrator |
2026-03-01 00:47:22.907711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444493 | orchestrator | Sunday 01 March 2026 00:47:22 +0000 (0:00:00.198) 0:00:25.923 **********
2026-03-01 00:47:33.444568 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.444575 | orchestrator |
2026-03-01 00:47:33.444580 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444584 | orchestrator | Sunday 01 March 2026 00:47:23 +0000 (0:00:00.194) 0:00:26.118 **********
2026-03-01 00:47:33.444591 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.444597 | orchestrator |
2026-03-01 00:47:33.444604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444612 | orchestrator | Sunday 01 March 2026 00:47:23 +0000 (0:00:00.201) 0:00:26.319 **********
2026-03-01 00:47:33.444619 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060)
2026-03-01 00:47:33.444627 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060)
2026-03-01 00:47:33.444634 | orchestrator |
2026-03-01 00:47:33.444640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444647 | orchestrator | Sunday 01 March 2026 00:47:23 +0000 (0:00:00.392) 0:00:26.712 **********
2026-03-01 00:47:33.444654 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389)
2026-03-01 00:47:33.444661 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389)
2026-03-01 00:47:33.444668 | orchestrator |
2026-03-01 00:47:33.444675 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444683 | orchestrator | Sunday 01 March 2026 00:47:24 +0000 (0:00:00.417) 0:00:27.129 **********
2026-03-01 00:47:33.444690 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6)
2026-03-01 00:47:33.444697 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6)
2026-03-01 00:47:33.444704 | orchestrator |
2026-03-01 00:47:33.444711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444719 | orchestrator | Sunday 01 March 2026 00:47:24 +0000 (0:00:00.407) 0:00:27.537 **********
2026-03-01 00:47:33.444741 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c)
2026-03-01 00:47:33.444750 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c)
2026-03-01 00:47:33.444758 | orchestrator |
2026-03-01 00:47:33.444765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:33.444772 | orchestrator | Sunday 01 March 2026 00:47:25 +0000 (0:00:00.571) 0:00:28.108 **********
2026-03-01 00:47:33.444779 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-01 00:47:33.444786 | orchestrator |
2026-03-01 00:47:33.444793 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.444799 | orchestrator | Sunday 01 March 2026 00:47:25 +0000 (0:00:00.480) 0:00:28.589 **********
2026-03-01 00:47:33.444825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-01 00:47:33.444833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-01 00:47:33.444840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-01 00:47:33.444846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-01 00:47:33.444853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-01 00:47:33.444859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-01 00:47:33.444867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-01 00:47:33.444874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-01 00:47:33.444881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-01 00:47:33.444888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-01 00:47:33.444896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-01 00:47:33.444903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-01 00:47:33.444911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-01 00:47:33.444919 | orchestrator |
2026-03-01 00:47:33.444927 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.444934 | orchestrator | Sunday 01 March 2026 00:47:26 +0000 (0:00:00.721) 0:00:29.310 **********
2026-03-01 00:47:33.444941 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.444949 | orchestrator |
2026-03-01 00:47:33.444956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.444962 | orchestrator | Sunday 01 March 2026 00:47:26 +0000 (0:00:00.179) 0:00:29.490 **********
2026-03-01 00:47:33.444969 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.444973 | orchestrator |
2026-03-01 00:47:33.444977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.444982 | orchestrator | Sunday 01 March 2026 00:47:26 +0000 (0:00:00.171) 0:00:29.661 **********
2026-03-01 00:47:33.444988 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.444994 | orchestrator |
2026-03-01 00:47:33.445017 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445024 | orchestrator | Sunday 01 March 2026 00:47:26 +0000 (0:00:00.258) 0:00:29.920 **********
2026-03-01 00:47:33.445031 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445037 | orchestrator |
2026-03-01 00:47:33.445043 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445050 | orchestrator | Sunday 01 March 2026 00:47:27 +0000 (0:00:00.193) 0:00:30.114 **********
2026-03-01 00:47:33.445057 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445064 | orchestrator |
2026-03-01 00:47:33.445071 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445077 | orchestrator | Sunday 01 March 2026 00:47:27 +0000 (0:00:00.191) 0:00:30.306 **********
2026-03-01 00:47:33.445083 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445091 | orchestrator |
2026-03-01 00:47:33.445095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445099 | orchestrator | Sunday 01 March 2026 00:47:27 +0000 (0:00:00.207) 0:00:30.514 **********
2026-03-01 00:47:33.445104 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445108 | orchestrator |
2026-03-01 00:47:33.445113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445117 | orchestrator | Sunday 01 March 2026 00:47:27 +0000 (0:00:00.191) 0:00:30.706 **********
2026-03-01 00:47:33.445129 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445134 | orchestrator |
2026-03-01 00:47:33.445138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445142 | orchestrator | Sunday 01 March 2026 00:47:27 +0000 (0:00:00.187) 0:00:30.893 **********
2026-03-01 00:47:33.445147 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-01 00:47:33.445151 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-01 00:47:33.445156 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-01 00:47:33.445160 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-01 00:47:33.445165 | orchestrator |
2026-03-01 00:47:33.445169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445173 | orchestrator | Sunday 01 March 2026 00:47:28 +0000 (0:00:00.780) 0:00:31.674 **********
2026-03-01 00:47:33.445178 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445182 | orchestrator |
2026-03-01 00:47:33.445187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445203 | orchestrator | Sunday 01 March 2026 00:47:28 +0000 (0:00:00.208) 0:00:31.882 **********
2026-03-01 00:47:33.445208 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445212 | orchestrator |
2026-03-01 00:47:33.445223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445235 | orchestrator | Sunday 01 March 2026 00:47:29 +0000 (0:00:00.537) 0:00:32.419 **********
2026-03-01 00:47:33.445240 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445244 | orchestrator |
2026-03-01 00:47:33.445248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:33.445252 | orchestrator | Sunday 01 March 2026 00:47:29 +0000 (0:00:00.190) 0:00:32.610 **********
2026-03-01 00:47:33.445256 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445259 | orchestrator |
2026-03-01 00:47:33.445263 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-01 00:47:33.445267 | orchestrator | Sunday 01 March 2026 00:47:29 +0000 (0:00:00.200) 0:00:32.810 **********
2026-03-01 00:47:33.445271 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445275 | orchestrator |
2026-03-01 00:47:33.445293 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-01 00:47:33.445297 | orchestrator | Sunday 01 March 2026 00:47:29 +0000 (0:00:00.117) 0:00:32.927 **********
2026-03-01 00:47:33.445301 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '024d169c-08bb-513a-b447-fe5a7c318e63'}})
2026-03-01 00:47:33.445305 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b33a93dc-e50a-56e8-9161-d310a7d41007'}})
2026-03-01 00:47:33.445309 | orchestrator |
2026-03-01 00:47:33.445313 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-01 00:47:33.445317 | orchestrator | Sunday 01 March 2026 00:47:30 +0000 (0:00:00.173) 0:00:33.101 **********
2026-03-01 00:47:33.445322 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})
2026-03-01 00:47:33.445327 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})
2026-03-01 00:47:33.445331 | orchestrator |
2026-03-01 00:47:33.445335 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-01 00:47:33.445339 | orchestrator | Sunday 01 March 2026 00:47:31 +0000 (0:00:01.827) 0:00:34.929 **********
2026-03-01 00:47:33.445343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})
2026-03-01 00:47:33.445348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})
2026-03-01 00:47:33.445357 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:33.445361 | orchestrator |
2026-03-01 00:47:33.445365 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-01 00:47:33.445369 | orchestrator | Sunday 01 March 2026 00:47:32 +0000 (0:00:00.139) 0:00:35.068 **********
2026-03-01 00:47:33.445372 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})
2026-03-01 00:47:33.445381 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})
2026-03-01 00:47:38.903186 | orchestrator |
2026-03-01 00:47:38.903270 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-01 00:47:38.903281 | orchestrator | Sunday 01 March 2026 00:47:33 +0000 (0:00:01.504) 0:00:36.573 **********
2026-03-01 00:47:38.903330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})
2026-03-01 00:47:38.903341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})
2026-03-01 00:47:38.903348 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:38.903355 | orchestrator |
2026-03-01 00:47:38.903362 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-01 00:47:38.903368 | orchestrator | Sunday 01 March 2026 00:47:33 +0000 (0:00:00.133) 0:00:36.707 **********
2026-03-01 00:47:38.903374 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:38.903379 | orchestrator |
2026-03-01 00:47:38.903385 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-01 00:47:38.903392 | orchestrator | Sunday 01 March 2026 00:47:33 +0000 (0:00:00.144) 0:00:36.851 **********
2026-03-01 00:47:38.903398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})
2026-03-01 00:47:38.903404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})
2026-03-01 00:47:38.903410 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:38.903417 | orchestrator |
2026-03-01 00:47:38.903423 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-01 00:47:38.903429 | orchestrator | Sunday 01 March 2026 00:47:33 +0000 (0:00:00.120) 0:00:36.972 **********
2026-03-01 00:47:38.903435 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:47:38.903440 | orchestrator |
2026-03-01 00:47:38.903447 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-01 00:47:38.903470 | orchestrator | Sunday
01 March 2026 00:47:34 +0000 (0:00:00.114) 0:00:37.086 ********** 2026-03-01 00:47:38.903477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:38.903484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:38.903491 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903498 | orchestrator | 2026-03-01 00:47:38.903505 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-01 00:47:38.903511 | orchestrator | Sunday 01 March 2026 00:47:34 +0000 (0:00:00.280) 0:00:37.366 ********** 2026-03-01 00:47:38.903518 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903525 | orchestrator | 2026-03-01 00:47:38.903532 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-01 00:47:38.903539 | orchestrator | Sunday 01 March 2026 00:47:34 +0000 (0:00:00.131) 0:00:37.498 ********** 2026-03-01 00:47:38.903545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:38.903571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:38.903578 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903585 | orchestrator | 2026-03-01 00:47:38.903590 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-01 00:47:38.903596 | orchestrator | Sunday 01 March 2026 00:47:34 +0000 (0:00:00.176) 0:00:37.675 ********** 2026-03-01 00:47:38.903602 | orchestrator | ok: [testbed-node-4] 
2026-03-01 00:47:38.903609 | orchestrator | 2026-03-01 00:47:38.903614 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-01 00:47:38.903620 | orchestrator | Sunday 01 March 2026 00:47:34 +0000 (0:00:00.129) 0:00:37.804 ********** 2026-03-01 00:47:38.903626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:38.903633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:38.903638 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903644 | orchestrator | 2026-03-01 00:47:38.903650 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-01 00:47:38.903656 | orchestrator | Sunday 01 March 2026 00:47:34 +0000 (0:00:00.185) 0:00:37.989 ********** 2026-03-01 00:47:38.903662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:38.903668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:38.903675 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903681 | orchestrator | 2026-03-01 00:47:38.903688 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-01 00:47:38.903710 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.164) 0:00:38.154 ********** 2026-03-01 00:47:38.903716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 
00:47:38.903723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:38.903729 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903735 | orchestrator | 2026-03-01 00:47:38.903742 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-01 00:47:38.903748 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.141) 0:00:38.295 ********** 2026-03-01 00:47:38.903755 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903761 | orchestrator | 2026-03-01 00:47:38.903767 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-01 00:47:38.903774 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.127) 0:00:38.422 ********** 2026-03-01 00:47:38.903780 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903787 | orchestrator | 2026-03-01 00:47:38.903794 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-01 00:47:38.903800 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.130) 0:00:38.553 ********** 2026-03-01 00:47:38.903807 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.903814 | orchestrator | 2026-03-01 00:47:38.903820 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-01 00:47:38.903827 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.134) 0:00:38.687 ********** 2026-03-01 00:47:38.903834 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:47:38.903840 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-01 00:47:38.903853 | orchestrator | } 2026-03-01 00:47:38.903860 | orchestrator | 2026-03-01 00:47:38.903866 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-01 
00:47:38.903872 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.133) 0:00:38.821 ********** 2026-03-01 00:47:38.903878 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:47:38.903884 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-01 00:47:38.903891 | orchestrator | } 2026-03-01 00:47:38.903897 | orchestrator | 2026-03-01 00:47:38.903908 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-01 00:47:38.903914 | orchestrator | Sunday 01 March 2026 00:47:35 +0000 (0:00:00.145) 0:00:38.966 ********** 2026-03-01 00:47:38.903921 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:47:38.903927 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-01 00:47:38.903934 | orchestrator | } 2026-03-01 00:47:38.903941 | orchestrator | 2026-03-01 00:47:38.903947 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-01 00:47:38.903953 | orchestrator | Sunday 01 March 2026 00:47:36 +0000 (0:00:00.259) 0:00:39.226 ********** 2026-03-01 00:47:38.903959 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:38.903965 | orchestrator | 2026-03-01 00:47:38.903971 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-01 00:47:38.903977 | orchestrator | Sunday 01 March 2026 00:47:36 +0000 (0:00:00.533) 0:00:39.759 ********** 2026-03-01 00:47:38.903983 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:38.903988 | orchestrator | 2026-03-01 00:47:38.903994 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-01 00:47:38.904001 | orchestrator | Sunday 01 March 2026 00:47:37 +0000 (0:00:00.614) 0:00:40.374 ********** 2026-03-01 00:47:38.904008 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:38.904014 | orchestrator | 2026-03-01 00:47:38.904021 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-01 00:47:38.904027 | orchestrator | Sunday 01 March 2026 00:47:37 +0000 (0:00:00.542) 0:00:40.916 ********** 2026-03-01 00:47:38.904034 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:38.904040 | orchestrator | 2026-03-01 00:47:38.904047 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-01 00:47:38.904053 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.143) 0:00:41.059 ********** 2026-03-01 00:47:38.904061 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904067 | orchestrator | 2026-03-01 00:47:38.904074 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-01 00:47:38.904081 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.109) 0:00:41.168 ********** 2026-03-01 00:47:38.904088 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904095 | orchestrator | 2026-03-01 00:47:38.904102 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-01 00:47:38.904109 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.104) 0:00:41.273 ********** 2026-03-01 00:47:38.904116 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:47:38.904123 | orchestrator |  "vgs_report": { 2026-03-01 00:47:38.904129 | orchestrator |  "vg": [] 2026-03-01 00:47:38.904136 | orchestrator |  } 2026-03-01 00:47:38.904143 | orchestrator | } 2026-03-01 00:47:38.904150 | orchestrator | 2026-03-01 00:47:38.904157 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-01 00:47:38.904163 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.128) 0:00:41.402 ********** 2026-03-01 00:47:38.904170 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904176 | orchestrator | 2026-03-01 00:47:38.904182 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-01 00:47:38.904188 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.141) 0:00:41.543 ********** 2026-03-01 00:47:38.904194 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904200 | orchestrator | 2026-03-01 00:47:38.904206 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-01 00:47:38.904217 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.125) 0:00:41.669 ********** 2026-03-01 00:47:38.904223 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904228 | orchestrator | 2026-03-01 00:47:38.904234 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-01 00:47:38.904240 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.119) 0:00:41.788 ********** 2026-03-01 00:47:38.904246 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:38.904252 | orchestrator | 2026-03-01 00:47:38.904264 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-01 00:47:43.327911 | orchestrator | Sunday 01 March 2026 00:47:38 +0000 (0:00:00.132) 0:00:41.920 ********** 2026-03-01 00:47:43.327982 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.327989 | orchestrator | 2026-03-01 00:47:43.327996 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-01 00:47:43.328003 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.263) 0:00:42.183 ********** 2026-03-01 00:47:43.328019 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328026 | orchestrator | 2026-03-01 00:47:43.328032 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-01 00:47:43.328038 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.123) 0:00:42.307 ********** 2026-03-01 00:47:43.328043 | orchestrator | skipping: [testbed-node-4] 
2026-03-01 00:47:43.328050 | orchestrator | 2026-03-01 00:47:43.328056 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-01 00:47:43.328062 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.130) 0:00:42.437 ********** 2026-03-01 00:47:43.328068 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328074 | orchestrator | 2026-03-01 00:47:43.328080 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-01 00:47:43.328086 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.138) 0:00:42.576 ********** 2026-03-01 00:47:43.328092 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328098 | orchestrator | 2026-03-01 00:47:43.328104 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-01 00:47:43.328110 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.149) 0:00:42.725 ********** 2026-03-01 00:47:43.328116 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328123 | orchestrator | 2026-03-01 00:47:43.328129 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-01 00:47:43.328136 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.125) 0:00:42.851 ********** 2026-03-01 00:47:43.328142 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328148 | orchestrator | 2026-03-01 00:47:43.328154 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-01 00:47:43.328160 | orchestrator | Sunday 01 March 2026 00:47:39 +0000 (0:00:00.118) 0:00:42.969 ********** 2026-03-01 00:47:43.328166 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328172 | orchestrator | 2026-03-01 00:47:43.328178 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-01 00:47:43.328186 | orchestrator | 
Sunday 01 March 2026 00:47:40 +0000 (0:00:00.134) 0:00:43.104 ********** 2026-03-01 00:47:43.328193 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328197 | orchestrator | 2026-03-01 00:47:43.328201 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-01 00:47:43.328205 | orchestrator | Sunday 01 March 2026 00:47:40 +0000 (0:00:00.132) 0:00:43.236 ********** 2026-03-01 00:47:43.328209 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328213 | orchestrator | 2026-03-01 00:47:43.328218 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-01 00:47:43.328222 | orchestrator | Sunday 01 March 2026 00:47:40 +0000 (0:00:00.143) 0:00:43.380 ********** 2026-03-01 00:47:43.328226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328269 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328273 | orchestrator | 2026-03-01 00:47:43.328277 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-01 00:47:43.328281 | orchestrator | Sunday 01 March 2026 00:47:40 +0000 (0:00:00.162) 0:00:43.542 ********** 2026-03-01 00:47:43.328285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328344 | orchestrator | skipping: 
[testbed-node-4] 2026-03-01 00:47:43.328353 | orchestrator | 2026-03-01 00:47:43.328359 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-01 00:47:43.328365 | orchestrator | Sunday 01 March 2026 00:47:40 +0000 (0:00:00.140) 0:00:43.683 ********** 2026-03-01 00:47:43.328374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328388 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328394 | orchestrator | 2026-03-01 00:47:43.328399 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-01 00:47:43.328406 | orchestrator | Sunday 01 March 2026 00:47:40 +0000 (0:00:00.281) 0:00:43.964 ********** 2026-03-01 00:47:43.328412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328425 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328431 | orchestrator | 2026-03-01 00:47:43.328453 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-01 00:47:43.328458 | orchestrator | Sunday 01 March 2026 00:47:41 +0000 (0:00:00.140) 0:00:44.105 ********** 2026-03-01 00:47:43.328462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 
'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328472 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328477 | orchestrator | 2026-03-01 00:47:43.328481 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-01 00:47:43.328486 | orchestrator | Sunday 01 March 2026 00:47:41 +0000 (0:00:00.150) 0:00:44.255 ********** 2026-03-01 00:47:43.328491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328500 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328504 | orchestrator | 2026-03-01 00:47:43.328509 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-01 00:47:43.328513 | orchestrator | Sunday 01 March 2026 00:47:41 +0000 (0:00:00.149) 0:00:44.405 ********** 2026-03-01 00:47:43.328518 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328541 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328545 | orchestrator | 2026-03-01 00:47:43.328548 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-01 
00:47:43.328552 | orchestrator | Sunday 01 March 2026 00:47:41 +0000 (0:00:00.151) 0:00:44.557 ********** 2026-03-01 00:47:43.328556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328560 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328563 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328567 | orchestrator | 2026-03-01 00:47:43.328571 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-01 00:47:43.328575 | orchestrator | Sunday 01 March 2026 00:47:41 +0000 (0:00:00.187) 0:00:44.744 ********** 2026-03-01 00:47:43.328579 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:43.328583 | orchestrator | 2026-03-01 00:47:43.328587 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-01 00:47:43.328590 | orchestrator | Sunday 01 March 2026 00:47:42 +0000 (0:00:00.562) 0:00:45.307 ********** 2026-03-01 00:47:43.328594 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:43.328598 | orchestrator | 2026-03-01 00:47:43.328602 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-01 00:47:43.328605 | orchestrator | Sunday 01 March 2026 00:47:42 +0000 (0:00:00.539) 0:00:45.846 ********** 2026-03-01 00:47:43.328609 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:47:43.328613 | orchestrator | 2026-03-01 00:47:43.328617 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-01 00:47:43.328620 | orchestrator | Sunday 01 March 2026 00:47:42 +0000 (0:00:00.143) 0:00:45.990 ********** 2026-03-01 00:47:43.328624 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'vg_name': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'}) 2026-03-01 00:47:43.328629 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'vg_name': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'}) 2026-03-01 00:47:43.328633 | orchestrator | 2026-03-01 00:47:43.328637 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-01 00:47:43.328641 | orchestrator | Sunday 01 March 2026 00:47:43 +0000 (0:00:00.160) 0:00:46.150 ********** 2026-03-01 00:47:43.328644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:43.328652 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:43.328656 | orchestrator | 2026-03-01 00:47:43.328659 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-01 00:47:43.328663 | orchestrator | Sunday 01 March 2026 00:47:43 +0000 (0:00:00.141) 0:00:46.292 ********** 2026-03-01 00:47:43.328667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:43.328673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:48.945021 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:48.945135 | orchestrator | 2026-03-01 00:47:48.945167 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-01 00:47:48.945207 | 
orchestrator | Sunday 01 March 2026 00:47:43 +0000 (0:00:00.127) 0:00:46.420 ********** 2026-03-01 00:47:48.945216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'})  2026-03-01 00:47:48.945226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'})  2026-03-01 00:47:48.945233 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:47:48.945240 | orchestrator | 2026-03-01 00:47:48.945246 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-01 00:47:48.945253 | orchestrator | Sunday 01 March 2026 00:47:43 +0000 (0:00:00.141) 0:00:46.561 ********** 2026-03-01 00:47:48.945260 | orchestrator | ok: [testbed-node-4] => { 2026-03-01 00:47:48.945267 | orchestrator |  "lvm_report": { 2026-03-01 00:47:48.945275 | orchestrator |  "lv": [ 2026-03-01 00:47:48.945282 | orchestrator |  { 2026-03-01 00:47:48.945291 | orchestrator |  "lv_name": "osd-block-024d169c-08bb-513a-b447-fe5a7c318e63", 2026-03-01 00:47:48.945299 | orchestrator |  "vg_name": "ceph-024d169c-08bb-513a-b447-fe5a7c318e63" 2026-03-01 00:47:48.945353 | orchestrator |  }, 2026-03-01 00:47:48.945359 | orchestrator |  { 2026-03-01 00:47:48.945365 | orchestrator |  "lv_name": "osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007", 2026-03-01 00:47:48.945371 | orchestrator |  "vg_name": "ceph-b33a93dc-e50a-56e8-9161-d310a7d41007" 2026-03-01 00:47:48.945377 | orchestrator |  } 2026-03-01 00:47:48.945383 | orchestrator |  ], 2026-03-01 00:47:48.945389 | orchestrator |  "pv": [ 2026-03-01 00:47:48.945394 | orchestrator |  { 2026-03-01 00:47:48.945400 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-01 00:47:48.945421 | orchestrator |  "vg_name": "ceph-024d169c-08bb-513a-b447-fe5a7c318e63" 2026-03-01 00:47:48.945429 | orchestrator |  }, 2026-03-01 
00:47:48.945435 | orchestrator |  { 2026-03-01 00:47:48.945441 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-01 00:47:48.945448 | orchestrator |  "vg_name": "ceph-b33a93dc-e50a-56e8-9161-d310a7d41007" 2026-03-01 00:47:48.945454 | orchestrator |  } 2026-03-01 00:47:48.945460 | orchestrator |  ] 2026-03-01 00:47:48.945467 | orchestrator |  } 2026-03-01 00:47:48.945475 | orchestrator | } 2026-03-01 00:47:48.945482 | orchestrator | 2026-03-01 00:47:48.945489 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-01 00:47:48.945497 | orchestrator | 2026-03-01 00:47:48.945504 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-01 00:47:48.945511 | orchestrator | Sunday 01 March 2026 00:47:43 +0000 (0:00:00.424) 0:00:46.985 ********** 2026-03-01 00:47:48.945517 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-01 00:47:48.945524 | orchestrator | 2026-03-01 00:47:48.945530 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-01 00:47:48.945536 | orchestrator | Sunday 01 March 2026 00:47:44 +0000 (0:00:00.229) 0:00:47.215 ********** 2026-03-01 00:47:48.945543 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:47:48.945549 | orchestrator | 2026-03-01 00:47:48.945555 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-01 00:47:48.945561 | orchestrator | Sunday 01 March 2026 00:47:44 +0000 (0:00:00.221) 0:00:47.436 ********** 2026-03-01 00:47:48.945568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-01 00:47:48.945575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-01 00:47:48.945581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-01 00:47:48.945587 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-01 00:47:48.945604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-01 00:47:48.945611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-01 00:47:48.945618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-01 00:47:48.945624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-01 00:47:48.945630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-01 00:47:48.945642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-01 00:47:48.945649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-01 00:47:48.945654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-01 00:47:48.945660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-01 00:47:48.945665 | orchestrator |
2026-03-01 00:47:48.945672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945677 | orchestrator | Sunday 01 March 2026 00:47:44 +0000 (0:00:00.390) 0:00:47.827 **********
2026-03-01 00:47:48.945683 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945689 | orchestrator |
2026-03-01 00:47:48.945694 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945700 | orchestrator | Sunday 01 March 2026 00:47:45 +0000 (0:00:00.254) 0:00:48.081 **********
2026-03-01 00:47:48.945706 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945711 | orchestrator |
2026-03-01 00:47:48.945718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945745 | orchestrator | Sunday 01 March 2026 00:47:45 +0000 (0:00:00.180) 0:00:48.261 **********
2026-03-01 00:47:48.945752 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945759 | orchestrator |
2026-03-01 00:47:48.945765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945772 | orchestrator | Sunday 01 March 2026 00:47:45 +0000 (0:00:00.175) 0:00:48.437 **********
2026-03-01 00:47:48.945779 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945786 | orchestrator |
2026-03-01 00:47:48.945792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945797 | orchestrator | Sunday 01 March 2026 00:47:45 +0000 (0:00:00.180) 0:00:48.618 **********
2026-03-01 00:47:48.945804 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945810 | orchestrator |
2026-03-01 00:47:48.945817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945823 | orchestrator | Sunday 01 March 2026 00:47:46 +0000 (0:00:00.496) 0:00:49.114 **********
2026-03-01 00:47:48.945829 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945835 | orchestrator |
2026-03-01 00:47:48.945841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945848 | orchestrator | Sunday 01 March 2026 00:47:46 +0000 (0:00:00.183) 0:00:49.298 **********
2026-03-01 00:47:48.945854 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945861 | orchestrator |
2026-03-01 00:47:48.945868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945875 | orchestrator | Sunday 01 March 2026 00:47:46 +0000 (0:00:00.198) 0:00:49.497 **********
2026-03-01 00:47:48.945881 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:48.945887 | orchestrator |
2026-03-01 00:47:48.945893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945900 | orchestrator | Sunday 01 March 2026 00:47:46 +0000 (0:00:00.179) 0:00:49.676 **********
2026-03-01 00:47:48.945906 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e)
2026-03-01 00:47:48.945914 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e)
2026-03-01 00:47:48.945930 | orchestrator |
2026-03-01 00:47:48.945936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945942 | orchestrator | Sunday 01 March 2026 00:47:47 +0000 (0:00:00.397) 0:00:50.074 **********
2026-03-01 00:47:48.945949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa)
2026-03-01 00:47:48.945955 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa)
2026-03-01 00:47:48.945961 | orchestrator |
2026-03-01 00:47:48.945966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.945972 | orchestrator | Sunday 01 March 2026 00:47:47 +0000 (0:00:00.385) 0:00:50.459 **********
2026-03-01 00:47:48.945978 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7)
2026-03-01 00:47:48.945984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7)
2026-03-01 00:47:48.945990 | orchestrator |
2026-03-01 00:47:48.945996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.946003 | orchestrator | Sunday 01 March 2026 00:47:47 +0000 (0:00:00.393) 0:00:50.852 **********
2026-03-01 00:47:48.946009 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c)
2026-03-01 00:47:48.946051 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c)
2026-03-01 00:47:48.946061 | orchestrator |
2026-03-01 00:47:48.946068 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-01 00:47:48.946077 | orchestrator | Sunday 01 March 2026 00:47:48 +0000 (0:00:00.401) 0:00:51.254 **********
2026-03-01 00:47:48.946084 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-01 00:47:48.946090 | orchestrator |
2026-03-01 00:47:48.946097 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:48.946103 | orchestrator | Sunday 01 March 2026 00:47:48 +0000 (0:00:00.347) 0:00:51.601 **********
2026-03-01 00:47:48.946115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-01 00:47:48.946123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-01 00:47:48.946130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-01 00:47:48.946136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-01 00:47:48.946143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-01 00:47:48.946151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-01 00:47:48.946158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-01 00:47:48.946165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-01 00:47:48.946173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-01 00:47:48.946180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-01 00:47:48.946187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-01 00:47:48.946204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-01 00:47:57.154787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-01 00:47:57.154852 | orchestrator |
2026-03-01 00:47:57.154859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154864 | orchestrator | Sunday 01 March 2026 00:47:49 +0000 (0:00:00.438) 0:00:52.039 **********
2026-03-01 00:47:57.154878 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.154883 | orchestrator |
2026-03-01 00:47:57.154887 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154891 | orchestrator | Sunday 01 March 2026 00:47:49 +0000 (0:00:00.189) 0:00:52.229 **********
2026-03-01 00:47:57.154895 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.154898 | orchestrator |
2026-03-01 00:47:57.154933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154939 | orchestrator | Sunday 01 March 2026 00:47:49 +0000 (0:00:00.495) 0:00:52.724 **********
2026-03-01 00:47:57.154943 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.154947 | orchestrator |
2026-03-01 00:47:57.154950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154954 | orchestrator | Sunday 01 March 2026 00:47:49 +0000 (0:00:00.186) 0:00:52.910 **********
2026-03-01 00:47:57.154958 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.154962 | orchestrator |
2026-03-01 00:47:57.154967 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154973 | orchestrator | Sunday 01 March 2026 00:47:50 +0000 (0:00:00.187) 0:00:53.098 **********
2026-03-01 00:47:57.154979 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.154985 | orchestrator |
2026-03-01 00:47:57.154991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.154997 | orchestrator | Sunday 01 March 2026 00:47:50 +0000 (0:00:00.191) 0:00:53.289 **********
2026-03-01 00:47:57.155004 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155009 | orchestrator |
2026-03-01 00:47:57.155015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155019 | orchestrator | Sunday 01 March 2026 00:47:50 +0000 (0:00:00.193) 0:00:53.483 **********
2026-03-01 00:47:57.155023 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155027 | orchestrator |
2026-03-01 00:47:57.155030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155034 | orchestrator | Sunday 01 March 2026 00:47:50 +0000 (0:00:00.207) 0:00:53.690 **********
2026-03-01 00:47:57.155038 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155041 | orchestrator |
2026-03-01 00:47:57.155045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155049 | orchestrator | Sunday 01 March 2026 00:47:50 +0000 (0:00:00.184) 0:00:53.875 **********
2026-03-01 00:47:57.155053 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-01 00:47:57.155057 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-01 00:47:57.155061 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-01 00:47:57.155064 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-01 00:47:57.155070 | orchestrator |
2026-03-01 00:47:57.155076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155083 | orchestrator | Sunday 01 March 2026 00:47:51 +0000 (0:00:00.592) 0:00:54.467 **********
2026-03-01 00:47:57.155089 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155092 | orchestrator |
2026-03-01 00:47:57.155096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155100 | orchestrator | Sunday 01 March 2026 00:47:51 +0000 (0:00:00.187) 0:00:54.654 **********
2026-03-01 00:47:57.155104 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155107 | orchestrator |
2026-03-01 00:47:57.155111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155117 | orchestrator | Sunday 01 March 2026 00:47:51 +0000 (0:00:00.195) 0:00:54.850 **********
2026-03-01 00:47:57.155124 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155130 | orchestrator |
2026-03-01 00:47:57.155138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-01 00:47:57.155142 | orchestrator | Sunday 01 March 2026 00:47:52 +0000 (0:00:00.221) 0:00:55.071 **********
2026-03-01 00:47:57.155150 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155154 | orchestrator |
2026-03-01 00:47:57.155158 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-01 00:47:57.155161 | orchestrator | Sunday 01 March 2026 00:47:52 +0000 (0:00:00.188) 0:00:55.259 **********
2026-03-01 00:47:57.155165 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155169 | orchestrator |
2026-03-01 00:47:57.155172 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-01 00:47:57.155176 | orchestrator | Sunday 01 March 2026 00:47:52 +0000 (0:00:00.274) 0:00:55.534 **********
2026-03-01 00:47:57.155180 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}})
2026-03-01 00:47:57.155184 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd1a7437a-a9c6-5afd-b028-da6f65a62b89'}})
2026-03-01 00:47:57.155189 | orchestrator |
2026-03-01 00:47:57.155195 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-01 00:47:57.155202 | orchestrator | Sunday 01 March 2026 00:47:52 +0000 (0:00:00.183) 0:00:55.718 **********
2026-03-01 00:47:57.155209 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155216 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155222 | orchestrator |
2026-03-01 00:47:57.155229 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-01 00:47:57.155246 | orchestrator | Sunday 01 March 2026 00:47:54 +0000 (0:00:01.817) 0:00:57.535 **********
2026-03-01 00:47:57.155253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155269 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155275 | orchestrator |
2026-03-01 00:47:57.155281 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-01 00:47:57.155287 | orchestrator | Sunday 01 March 2026 00:47:54 +0000 (0:00:00.139) 0:00:57.675 **********
2026-03-01 00:47:57.155293 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155299 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155306 | orchestrator |
2026-03-01 00:47:57.155393 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-01 00:47:57.155403 | orchestrator | Sunday 01 March 2026 00:47:55 +0000 (0:00:01.204) 0:00:58.880 **********
2026-03-01 00:47:57.155411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155429 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155437 | orchestrator |
2026-03-01 00:47:57.155444 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-01 00:47:57.155450 | orchestrator | Sunday 01 March 2026 00:47:55 +0000 (0:00:00.143) 0:00:59.024 **********
2026-03-01 00:47:57.155457 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155465 | orchestrator |
2026-03-01 00:47:57.155473 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-01 00:47:57.155480 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.122) 0:00:59.146 **********
2026-03-01 00:47:57.155493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155507 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155514 | orchestrator |
2026-03-01 00:47:57.155521 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-01 00:47:57.155528 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.155) 0:00:59.302 **********
2026-03-01 00:47:57.155535 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155542 | orchestrator |
2026-03-01 00:47:57.155547 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-01 00:47:57.155550 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.110) 0:00:59.412 **********
2026-03-01 00:47:57.155554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155562 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155566 | orchestrator |
2026-03-01 00:47:57.155569 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-01 00:47:57.155573 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.135) 0:00:59.547 **********
2026-03-01 00:47:57.155577 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155580 | orchestrator |
2026-03-01 00:47:57.155584 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-01 00:47:57.155588 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.129) 0:00:59.677 **********
2026-03-01 00:47:57.155591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:47:57.155595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:47:57.155599 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:47:57.155603 | orchestrator |
2026-03-01 00:47:57.155606 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-01 00:47:57.155610 | orchestrator | Sunday 01 March 2026 00:47:56 +0000 (0:00:00.283) 0:00:59.813 **********
2026-03-01 00:47:57.155614 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:47:57.155618 | orchestrator |
2026-03-01 00:47:57.155622 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-01 00:47:57.155626 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.283) 0:01:00.096 **********
2026-03-01 00:47:57.155634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:03.000124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:03.000249 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000271 | orchestrator |
2026-03-01 00:48:03.000283 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-01 00:48:03.000291 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.169) 0:01:00.266 **********
2026-03-01 00:48:03.000297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:03.000304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:03.000395 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000403 | orchestrator |
2026-03-01 00:48:03.000410 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-01 00:48:03.000418 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.147) 0:01:00.413 **********
2026-03-01 00:48:03.000425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:03.000432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:03.000439 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000446 | orchestrator |
2026-03-01 00:48:03.000453 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-01 00:48:03.000475 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.152) 0:01:00.565 **********
2026-03-01 00:48:03.000482 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000488 | orchestrator |
2026-03-01 00:48:03.000495 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-01 00:48:03.000503 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.161) 0:01:00.727 **********
2026-03-01 00:48:03.000508 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000512 | orchestrator |
2026-03-01 00:48:03.000516 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-01 00:48:03.000520 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.143) 0:01:00.871 **********
2026-03-01 00:48:03.000524 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000527 | orchestrator |
2026-03-01 00:48:03.000531 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-01 00:48:03.000537 | orchestrator | Sunday 01 March 2026 00:47:57 +0000 (0:00:00.139) 0:01:01.010 **********
2026-03-01 00:48:03.000543 | orchestrator | ok: [testbed-node-5] => {
2026-03-01 00:48:03.000549 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-01 00:48:03.000554 | orchestrator | }
2026-03-01 00:48:03.000563 | orchestrator |
2026-03-01 00:48:03.000572 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-01 00:48:03.000577 | orchestrator | Sunday 01 March 2026 00:47:58 +0000 (0:00:00.146) 0:01:01.156 **********
2026-03-01 00:48:03.000584 | orchestrator | ok: [testbed-node-5] => {
2026-03-01 00:48:03.000590 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-01 00:48:03.000595 | orchestrator | }
2026-03-01 00:48:03.000601 | orchestrator |
2026-03-01 00:48:03.000606 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-01 00:48:03.000612 | orchestrator | Sunday 01 March 2026 00:47:58 +0000 (0:00:00.122) 0:01:01.279 **********
2026-03-01 00:48:03.000618 | orchestrator | ok: [testbed-node-5] => {
2026-03-01 00:48:03.000625 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-01 00:48:03.000642 | orchestrator | }
2026-03-01 00:48:03.000648 | orchestrator |
2026-03-01 00:48:03.000655 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-01 00:48:03.000659 | orchestrator | Sunday 01 March 2026 00:47:58 +0000 (0:00:00.175) 0:01:01.454 **********
2026-03-01 00:48:03.000663 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:03.000667 | orchestrator |
2026-03-01 00:48:03.000671 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-01 00:48:03.000676 | orchestrator | Sunday 01 March 2026 00:47:59 +0000 (0:00:00.579) 0:01:02.033 **********
2026-03-01 00:48:03.000681 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:03.000688 | orchestrator |
2026-03-01 00:48:03.000693 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-01 00:48:03.000699 | orchestrator | Sunday 01 March 2026 00:47:59 +0000 (0:00:00.561) 0:01:02.595 **********
2026-03-01 00:48:03.000705 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:03.000722 | orchestrator |
2026-03-01 00:48:03.000728 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-01 00:48:03.000732 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.671) 0:01:03.267 **********
2026-03-01 00:48:03.000736 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:03.000739 | orchestrator |
2026-03-01 00:48:03.000743 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-01 00:48:03.000747 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.139) 0:01:03.406 **********
2026-03-01 00:48:03.000751 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000754 | orchestrator |
2026-03-01 00:48:03.000758 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-01 00:48:03.000762 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.098) 0:01:03.505 **********
2026-03-01 00:48:03.000765 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000769 | orchestrator |
2026-03-01 00:48:03.000773 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-01 00:48:03.000776 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.097) 0:01:03.602 **********
2026-03-01 00:48:03.000780 | orchestrator | ok: [testbed-node-5] => {
2026-03-01 00:48:03.000784 | orchestrator |  "vgs_report": {
2026-03-01 00:48:03.000788 | orchestrator |  "vg": []
2026-03-01 00:48:03.000806 | orchestrator |  }
2026-03-01 00:48:03.000810 | orchestrator | }
2026-03-01 00:48:03.000814 | orchestrator |
2026-03-01 00:48:03.000820 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-01 00:48:03.000826 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.127) 0:01:03.730 **********
2026-03-01 00:48:03.000831 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000837 | orchestrator |
2026-03-01 00:48:03.000842 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-01 00:48:03.000848 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.131) 0:01:03.861 **********
2026-03-01 00:48:03.000853 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000859 | orchestrator |
2026-03-01 00:48:03.000865 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-01 00:48:03.000870 | orchestrator | Sunday 01 March 2026 00:48:00 +0000 (0:00:00.112) 0:01:03.974 **********
2026-03-01 00:48:03.000876 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000882 | orchestrator |
2026-03-01 00:48:03.000888 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-01 00:48:03.000894 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.136) 0:01:04.111 **********
2026-03-01 00:48:03.000900 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000906 | orchestrator |
2026-03-01 00:48:03.000912 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-01 00:48:03.000918 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.128) 0:01:04.239 **********
2026-03-01 00:48:03.000924 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000930 | orchestrator |
2026-03-01 00:48:03.000937 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-01 00:48:03.000942 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.122) 0:01:04.362 **********
2026-03-01 00:48:03.000946 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000949 | orchestrator |
2026-03-01 00:48:03.000953 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-01 00:48:03.000958 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.120) 0:01:04.483 **********
2026-03-01 00:48:03.000961 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000965 | orchestrator |
2026-03-01 00:48:03.000969 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-01 00:48:03.000973 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.132) 0:01:04.615 **********
2026-03-01 00:48:03.000976 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.000980 | orchestrator |
2026-03-01 00:48:03.000984 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-01 00:48:03.000994 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.266) 0:01:04.882 **********
2026-03-01 00:48:03.000997 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001001 | orchestrator |
2026-03-01 00:48:03.001005 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-01 00:48:03.001009 | orchestrator | Sunday 01 March 2026 00:48:01 +0000 (0:00:00.131) 0:01:05.014 **********
2026-03-01 00:48:03.001013 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001016 | orchestrator |
2026-03-01 00:48:03.001020 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-01 00:48:03.001024 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.111) 0:01:05.126 **********
2026-03-01 00:48:03.001028 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001031 | orchestrator |
2026-03-01 00:48:03.001035 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-01 00:48:03.001039 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.129) 0:01:05.256 **********
2026-03-01 00:48:03.001042 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001046 | orchestrator |
2026-03-01 00:48:03.001050 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-01 00:48:03.001053 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.134) 0:01:05.390 **********
2026-03-01 00:48:03.001057 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001061 | orchestrator |
2026-03-01 00:48:03.001065 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-01 00:48:03.001068 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.137) 0:01:05.528 **********
2026-03-01 00:48:03.001072 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001075 | orchestrator |
2026-03-01 00:48:03.001079 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-01 00:48:03.001083 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.119) 0:01:05.647 **********
2026-03-01 00:48:03.001087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:03.001091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:03.001095 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001098 | orchestrator |
2026-03-01 00:48:03.001102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-01 00:48:03.001106 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.145) 0:01:05.793 **********
2026-03-01 00:48:03.001110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:03.001113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:03.001117 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:03.001121 | orchestrator |
2026-03-01 00:48:03.001124 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-01 00:48:03.001128 | orchestrator | Sunday 01 March 2026 00:48:02 +0000 (0:00:00.164) 0:01:05.957 **********
2026-03-01 00:48:03.001137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.963282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.963485 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.963507 | orchestrator |
2026-03-01 00:48:05.963525 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-01 00:48:05.963542 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.146) 0:01:06.104 **********
2026-03-01 00:48:05.963598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.963617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.963635 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.963652 | orchestrator |
2026-03-01 00:48:05.963668 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-01 00:48:05.963684 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.151) 0:01:06.256 **********
2026-03-01 00:48:05.963700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.963724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.963740 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.963754 | orchestrator |
2026-03-01 00:48:05.963770 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-01 00:48:05.963785 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.145) 0:01:06.401 **********
2026-03-01 00:48:05.963801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.963816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.963832 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.963848 | orchestrator |
2026-03-01 00:48:05.963864 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-01 00:48:05.963880 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.290) 0:01:06.692 **********
2026-03-01 00:48:05.963897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.963914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.963929 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.963946 | orchestrator |
2026-03-01 00:48:05.963963 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-01 00:48:05.963980 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.153) 0:01:06.845 **********
2026-03-01 00:48:05.963995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.964012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.964028 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:48:05.964045 | orchestrator |
2026-03-01 00:48:05.964062 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-01 00:48:05.964079 | orchestrator | Sunday 01 March 2026 00:48:03 +0000 (0:00:00.132) 0:01:06.977 **********
2026-03-01 00:48:05.964097 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:05.964115 | orchestrator |
2026-03-01 00:48:05.964132 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-01 00:48:05.964149 | orchestrator | Sunday 01 March 2026 00:48:04 +0000 (0:00:00.514) 0:01:07.491 **********
2026-03-01 00:48:05.964166 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:05.964182 | orchestrator |
2026-03-01 00:48:05.964199 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-01 00:48:05.964230 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.562) 0:01:08.054 **********
2026-03-01 00:48:05.964246 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:48:05.964263 | orchestrator |
2026-03-01 00:48:05.964281 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-01 00:48:05.964298 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.112) 0:01:08.166 **********
2026-03-01 00:48:05.964316 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'vg_name': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.964354 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'vg_name': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})
2026-03-01 00:48:05.964372 | orchestrator |
2026-03-01 00:48:05.964387 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-01 00:48:05.964402 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.166) 0:01:08.333 **********
2026-03-01 00:48:05.964441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})
2026-03-01 00:48:05.964457 | orchestrator | skipping: [testbed-node-5] => (item=
'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})  2026-03-01 00:48:05.964471 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:48:05.964487 | orchestrator | 2026-03-01 00:48:05.964500 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-01 00:48:05.964513 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.195) 0:01:08.528 ********** 2026-03-01 00:48:05.964528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})  2026-03-01 00:48:05.964541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})  2026-03-01 00:48:05.964555 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:48:05.964570 | orchestrator | 2026-03-01 00:48:05.964585 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-01 00:48:05.964606 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.146) 0:01:08.674 ********** 2026-03-01 00:48:05.964622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'})  2026-03-01 00:48:05.964647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'})  2026-03-01 00:48:05.964662 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:48:05.964681 | orchestrator | 2026-03-01 00:48:05.964702 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-01 00:48:05.964717 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.154) 0:01:08.829 ********** 2026-03-01 00:48:05.964732 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-01 00:48:05.964746 | orchestrator |  "lvm_report": { 2026-03-01 00:48:05.964760 | orchestrator |  "lv": [ 2026-03-01 00:48:05.964772 | orchestrator |  { 2026-03-01 00:48:05.964787 | orchestrator |  "lv_name": "osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d", 2026-03-01 00:48:05.964802 | orchestrator |  "vg_name": "ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d" 2026-03-01 00:48:05.964818 | orchestrator |  }, 2026-03-01 00:48:05.964833 | orchestrator |  { 2026-03-01 00:48:05.964849 | orchestrator |  "lv_name": "osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89", 2026-03-01 00:48:05.964863 | orchestrator |  "vg_name": "ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89" 2026-03-01 00:48:05.964877 | orchestrator |  } 2026-03-01 00:48:05.964893 | orchestrator |  ], 2026-03-01 00:48:05.964909 | orchestrator |  "pv": [ 2026-03-01 00:48:05.964938 | orchestrator |  { 2026-03-01 00:48:05.964948 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-01 00:48:05.964957 | orchestrator |  "vg_name": "ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d" 2026-03-01 00:48:05.964965 | orchestrator |  }, 2026-03-01 00:48:05.964974 | orchestrator |  { 2026-03-01 00:48:05.964982 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-01 00:48:05.964991 | orchestrator |  "vg_name": "ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89" 2026-03-01 00:48:05.965000 | orchestrator |  } 2026-03-01 00:48:05.965008 | orchestrator |  ] 2026-03-01 00:48:05.965017 | orchestrator |  } 2026-03-01 00:48:05.965025 | orchestrator | } 2026-03-01 00:48:05.965034 | orchestrator | 2026-03-01 00:48:05.965043 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:48:05.965051 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-01 00:48:05.965060 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-01 00:48:05.965069 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-01 00:48:05.965077 | orchestrator | 2026-03-01 00:48:05.965085 | orchestrator | 2026-03-01 00:48:05.965094 | orchestrator | 2026-03-01 00:48:05.965102 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:48:05.965111 | orchestrator | Sunday 01 March 2026 00:48:05 +0000 (0:00:00.139) 0:01:08.968 ********** 2026-03-01 00:48:05.965119 | orchestrator | =============================================================================== 2026-03-01 00:48:05.965128 | orchestrator | Create block VGs -------------------------------------------------------- 5.79s 2026-03-01 00:48:05.965137 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s 2026-03-01 00:48:05.965145 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-03-01 00:48:05.965155 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-01 00:48:05.965169 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.69s 2026-03-01 00:48:05.965184 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-03-01 00:48:05.965199 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-03-01 00:48:05.965213 | orchestrator | Add known partitions to the list of available block devices ------------- 1.54s 2026-03-01 00:48:05.965241 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2026-03-01 00:48:06.244011 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-01 00:48:06.244069 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-03-01 00:48:06.244079 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-03-01 00:48:06.244086 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-03-01 00:48:06.244092 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-03-01 00:48:06.244098 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2026-03-01 00:48:06.244104 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-03-01 00:48:06.244111 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.61s 2026-03-01 00:48:06.244117 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.60s 2026-03-01 00:48:06.244123 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.60s 2026-03-01 00:48:06.244129 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-01 00:48:18.422659 | orchestrator | 2026-03-01 00:48:18 | INFO  | Prepare task for execution of facts. 2026-03-01 00:48:18.504006 | orchestrator | 2026-03-01 00:48:18 | INFO  | Task 126f7f93-c36e-4966-a070-a98b41dd1680 (facts) was prepared for execution. 2026-03-01 00:48:18.504076 | orchestrator | 2026-03-01 00:48:18 | INFO  | It takes a moment until task 126f7f93-c36e-4966-a070-a98b41dd1680 (facts) has been started and output is visible here. 
2026-03-01 00:48:29.971516 | orchestrator | 2026-03-01 00:48:29.971569 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-01 00:48:29.971575 | orchestrator | 2026-03-01 00:48:29.971580 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-01 00:48:29.971584 | orchestrator | Sunday 01 March 2026 00:48:22 +0000 (0:00:00.293) 0:00:00.293 ********** 2026-03-01 00:48:29.971587 | orchestrator | ok: [testbed-manager] 2026-03-01 00:48:29.971592 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:48:29.971595 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:48:29.971599 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:48:29.971603 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:48:29.971607 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:48:29.971610 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:48:29.971614 | orchestrator | 2026-03-01 00:48:29.971617 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-01 00:48:29.971621 | orchestrator | Sunday 01 March 2026 00:48:23 +0000 (0:00:01.124) 0:00:01.417 ********** 2026-03-01 00:48:29.971625 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:48:29.971629 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:48:29.971632 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:48:29.971636 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:48:29.971639 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:48:29.971643 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:48:29.971646 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:48:29.971650 | orchestrator | 2026-03-01 00:48:29.971654 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-01 00:48:29.971657 | orchestrator | 2026-03-01 00:48:29.971661 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-01 00:48:29.971664 | orchestrator | Sunday 01 March 2026 00:48:25 +0000 (0:00:01.232) 0:00:02.650 ********** 2026-03-01 00:48:29.971668 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:48:29.971672 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:48:29.971675 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:48:29.971679 | orchestrator | ok: [testbed-manager] 2026-03-01 00:48:29.971682 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:48:29.971686 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:48:29.971690 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:48:29.971696 | orchestrator | 2026-03-01 00:48:29.971701 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-01 00:48:29.971710 | orchestrator | 2026-03-01 00:48:29.971718 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-01 00:48:29.971723 | orchestrator | Sunday 01 March 2026 00:48:29 +0000 (0:00:03.984) 0:00:06.634 ********** 2026-03-01 00:48:29.971730 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:48:29.971735 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:48:29.971741 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:48:29.971746 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:48:29.971752 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:48:29.971758 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:48:29.971763 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:48:29.971769 | orchestrator | 2026-03-01 00:48:29.971775 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:48:29.971781 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971795 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-01 00:48:29.971820 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971824 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971828 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971831 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971835 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-01 00:48:29.971839 | orchestrator | 2026-03-01 00:48:29.971842 | orchestrator | 2026-03-01 00:48:29.971846 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:48:29.971849 | orchestrator | Sunday 01 March 2026 00:48:29 +0000 (0:00:00.527) 0:00:07.162 ********** 2026-03-01 00:48:29.971853 | orchestrator | =============================================================================== 2026-03-01 00:48:29.971857 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.98s 2026-03-01 00:48:29.971860 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-03-01 00:48:29.971864 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-03-01 00:48:29.971870 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-03-01 00:48:42.280154 | orchestrator | 2026-03-01 00:48:42 | INFO  | Prepare task for execution of frr. 2026-03-01 00:48:42.360095 | orchestrator | 2026-03-01 00:48:42 | INFO  | Task ca90e9ee-20cd-4b25-b4f6-008aeee2256d (frr) was prepared for execution. 
2026-03-01 00:48:42.360148 | orchestrator | 2026-03-01 00:48:42 | INFO  | It takes a moment until task ca90e9ee-20cd-4b25-b4f6-008aeee2256d (frr) has been started and output is visible here. 2026-03-01 00:49:08.108510 | orchestrator | 2026-03-01 00:49:08.108617 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-01 00:49:08.108631 | orchestrator | 2026-03-01 00:49:08.108638 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-01 00:49:08.108645 | orchestrator | Sunday 01 March 2026 00:48:47 +0000 (0:00:00.243) 0:00:00.243 ********** 2026-03-01 00:49:08.108651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-01 00:49:08.108659 | orchestrator | 2026-03-01 00:49:08.108665 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-01 00:49:08.108671 | orchestrator | Sunday 01 March 2026 00:48:47 +0000 (0:00:00.194) 0:00:00.438 ********** 2026-03-01 00:49:08.108686 | orchestrator | changed: [testbed-manager] 2026-03-01 00:49:08.108693 | orchestrator | 2026-03-01 00:49:08.108700 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-01 00:49:08.108707 | orchestrator | Sunday 01 March 2026 00:48:48 +0000 (0:00:01.180) 0:00:01.618 ********** 2026-03-01 00:49:08.108713 | orchestrator | changed: [testbed-manager] 2026-03-01 00:49:08.108717 | orchestrator | 2026-03-01 00:49:08.108721 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-01 00:49:08.108725 | orchestrator | Sunday 01 March 2026 00:48:57 +0000 (0:00:09.422) 0:00:11.041 ********** 2026-03-01 00:49:08.108729 | orchestrator | ok: [testbed-manager] 2026-03-01 00:49:08.108734 | orchestrator | 2026-03-01 00:49:08.108740 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-01 00:49:08.108747 | orchestrator | Sunday 01 March 2026 00:48:59 +0000 (0:00:01.034) 0:00:12.075 ********** 2026-03-01 00:49:08.108753 | orchestrator | changed: [testbed-manager] 2026-03-01 00:49:08.108773 | orchestrator | 2026-03-01 00:49:08.108780 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-01 00:49:08.108787 | orchestrator | Sunday 01 March 2026 00:48:59 +0000 (0:00:00.922) 0:00:12.998 ********** 2026-03-01 00:49:08.108793 | orchestrator | ok: [testbed-manager] 2026-03-01 00:49:08.108797 | orchestrator | 2026-03-01 00:49:08.108801 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-01 00:49:08.108804 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:01.246) 0:00:14.245 ********** 2026-03-01 00:49:08.108808 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:49:08.108812 | orchestrator | 2026-03-01 00:49:08.108816 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-01 00:49:08.108819 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:00.162) 0:00:14.407 ********** 2026-03-01 00:49:08.108823 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:49:08.108827 | orchestrator | 2026-03-01 00:49:08.108834 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-01 00:49:08.108840 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:00.149) 0:00:14.556 ********** 2026-03-01 00:49:08.108846 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:49:08.108852 | orchestrator | 2026-03-01 00:49:08.108859 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-01 00:49:08.108866 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:00.162) 0:00:14.719 ********** 2026-03-01 
00:49:08.108872 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:49:08.108879 | orchestrator | 2026-03-01 00:49:08.108885 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-01 00:49:08.108892 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:00.146) 0:00:14.865 ********** 2026-03-01 00:49:08.108896 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:49:08.108900 | orchestrator | 2026-03-01 00:49:08.108904 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-01 00:49:08.108907 | orchestrator | Sunday 01 March 2026 00:49:01 +0000 (0:00:00.175) 0:00:15.041 ********** 2026-03-01 00:49:08.108911 | orchestrator | changed: [testbed-manager] 2026-03-01 00:49:08.108915 | orchestrator | 2026-03-01 00:49:08.108919 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-01 00:49:08.108925 | orchestrator | Sunday 01 March 2026 00:49:03 +0000 (0:00:01.211) 0:00:16.252 ********** 2026-03-01 00:49:08.108932 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-01 00:49:08.108938 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-01 00:49:08.108945 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-01 00:49:08.108952 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-01 00:49:08.108959 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-01 00:49:08.108965 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-01 00:49:08.108971 | orchestrator | 2026-03-01 00:49:08.108977 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-01 00:49:08.108984 | orchestrator | Sunday 01 March 2026 00:49:05 +0000 (0:00:02.171) 0:00:18.424 ********** 2026-03-01 00:49:08.108990 | orchestrator | ok: [testbed-manager] 2026-03-01 00:49:08.108996 | orchestrator | 2026-03-01 00:49:08.109003 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-01 00:49:08.109010 | orchestrator | Sunday 01 March 2026 00:49:06 +0000 (0:00:01.189) 0:00:19.613 ********** 2026-03-01 00:49:08.109016 | orchestrator | changed: [testbed-manager] 2026-03-01 00:49:08.109023 | orchestrator | 2026-03-01 00:49:08.109029 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:49:08.109040 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 00:49:08.109047 | orchestrator | 2026-03-01 00:49:08.109053 | orchestrator | 2026-03-01 00:49:08.109072 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:49:08.109079 | orchestrator | Sunday 01 March 2026 00:49:07 +0000 (0:00:01.308) 0:00:20.922 ********** 2026-03-01 00:49:08.109085 | orchestrator | =============================================================================== 2026-03-01 00:49:08.109091 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.42s 2026-03-01 00:49:08.109097 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.17s 2026-03-01 00:49:08.109104 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s 2026-03-01 00:49:08.109111 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.25s 2026-03-01 00:49:08.109117 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.21s 
2026-03-01 00:49:08.109124 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.19s 2026-03-01 00:49:08.109130 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.18s 2026-03-01 00:49:08.109137 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-03-01 00:49:08.109144 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s 2026-03-01 00:49:08.109150 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.19s 2026-03-01 00:49:08.109156 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-03-01 00:49:08.109163 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-03-01 00:49:08.109170 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-03-01 00:49:08.109176 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.15s 2026-03-01 00:49:08.109182 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-03-01 00:49:08.297614 | orchestrator | 2026-03-01 00:49:08.298951 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 1 00:49:08 UTC 2026 2026-03-01 00:49:08.298977 | orchestrator | 2026-03-01 00:49:10.085065 | orchestrator | 2026-03-01 00:49:10 | INFO  | Collection nutshell is prepared for execution 2026-03-01 00:49:10.085150 | orchestrator | 2026-03-01 00:49:10 | INFO  | A [0] - dotfiles 2026-03-01 00:49:20.154206 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - homer 2026-03-01 00:49:20.154279 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - netdata 2026-03-01 00:49:20.154286 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - openstackclient 2026-03-01 00:49:20.154291 | orchestrator | 2026-03-01 00:49:20 
| INFO  | A [0] - phpmyadmin 2026-03-01 00:49:20.154399 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - common 2026-03-01 00:49:20.158943 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- loadbalancer 2026-03-01 00:49:20.159040 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [2] --- opensearch 2026-03-01 00:49:20.159053 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [2] --- mariadb-ng 2026-03-01 00:49:20.159885 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [3] ---- horizon 2026-03-01 00:49:20.159927 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [3] ---- keystone 2026-03-01 00:49:20.159940 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- neutron 2026-03-01 00:49:20.159967 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ wait-for-nova 2026-03-01 00:49:20.159980 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [6] ------- octavia 2026-03-01 00:49:20.161999 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- barbican 2026-03-01 00:49:20.162150 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- designate 2026-03-01 00:49:20.162165 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- ironic 2026-03-01 00:49:20.162784 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- placement 2026-03-01 00:49:20.162884 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- magnum 2026-03-01 00:49:20.163583 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- openvswitch 2026-03-01 00:49:20.163621 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [2] --- ovn 2026-03-01 00:49:20.165001 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- memcached 2026-03-01 00:49:20.165035 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- redis 2026-03-01 00:49:20.165045 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- rabbitmq-ng 2026-03-01 00:49:20.165055 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - kubernetes 2026-03-01 00:49:20.167419 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- 
kubeconfig 2026-03-01 00:49:20.167488 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- copy-kubeconfig 2026-03-01 00:49:20.167653 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [0] - ceph 2026-03-01 00:49:20.169845 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [1] -- ceph-pools 2026-03-01 00:49:20.170062 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [2] --- copy-ceph-keys 2026-03-01 00:49:20.170093 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [3] ---- cephclient 2026-03-01 00:49:20.170407 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-01 00:49:20.170480 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- wait-for-keystone 2026-03-01 00:49:20.170492 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-01 00:49:20.171119 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ glance 2026-03-01 00:49:20.171156 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ cinder 2026-03-01 00:49:20.171211 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ nova 2026-03-01 00:49:20.171221 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [4] ----- prometheus 2026-03-01 00:49:20.171230 | orchestrator | 2026-03-01 00:49:20 | INFO  | A [5] ------ grafana 2026-03-01 00:49:20.366563 | orchestrator | 2026-03-01 00:49:20 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-01 00:49:20.366636 | orchestrator | 2026-03-01 00:49:20 | INFO  | Tasks are running in the background 2026-03-01 00:49:23.652028 | orchestrator | 2026-03-01 00:49:23 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-01 00:49:25.776058 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:49:25.776152 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task dcc6d59c-7ac3-4198-a978-fe238bc4707c is in state STARTED 2026-03-01 00:49:25.776707 | orchestrator | 2026-03-01 00:49:25 | INFO 
| Task 87bf135b-1627-4c65-9237-d3440ed67768 is in state STARTED
2026-03-01 00:49:25.778490 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task 8419fdd5-5a57-4fdc-a487-14e6a8d0cad1 is in state STARTED
2026-03-01 00:49:25.780011 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:49:25.780349 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:49:25.781879 | orchestrator | 2026-03-01 00:49:25 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED
2026-03-01 00:49:25.781950 | orchestrator | 2026-03-01 00:49:25 | INFO  | Wait 1 second(s) until the next check
[... the same seven-task status check (fdf9c491, dcc6d59c, 87bf135b, 8419fdd5, 3fb90159, 208a93d0, 1b6f4b84 — all in state STARTED) repeats every ~3 seconds through 00:49:47 ...]
2026-03-01 00:49:50.336569 | orchestrator |
2026-03-01 00:49:50.336716 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-01 00:49:50.336738 | orchestrator |
2026-03-01 00:49:50.336754 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-01 00:49:50.336771 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:01.109) 0:00:01.109 **********
2026-03-01 00:49:50.336786 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:49:50.336798 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:49:50.336806 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:49:50.336814 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:49:50.336822 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:49:50.336830 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:49:50.336838 | orchestrator | changed: [testbed-manager]
2026-03-01 00:49:50.336846 | orchestrator |
2026-03-01 00:49:50.336854 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
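The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above come from a simple poll-until-done loop over the orchestrator's task queue. A minimal sketch of that pattern, assuming a hypothetical `get_task_state(task_id)` helper in place of the real status API (this is not the actual OSISM implementation):

```python
import itertools
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, max_checks=None):
    """Poll every task until none is left in state STARTED.

    `get_task_state(task_id)` is a hypothetical stand-in for the real
    orchestrator status call; it returns e.g. "STARTED" or "SUCCESS".
    Returns True once all tasks have left STARTED, False if `max_checks`
    polling rounds pass first.
    """
    pending = set(task_ids)
    rounds = itertools.count() if max_checks is None else range(max_checks)
    for _ in rounds:
        # sorted() copies the set, so we can discard while iterating
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return False
```

In the log the check runs roughly every 3 seconds despite the 1-second wait message, because each round also spends time querying the task states themselves.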
********
2026-03-01 00:49:50.336862 | orchestrator | Sunday 01 March 2026 00:49:37 +0000 (0:00:02.731) 0:00:04.764 **********
2026-03-01 00:49:50.336871 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-01 00:49:50.336879 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-01 00:49:50.336887 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-01 00:49:50.336895 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-01 00:49:50.336903 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-01 00:49:50.336912 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-01 00:49:50.336921 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-01 00:49:50.336930 | orchestrator |
2026-03-01 00:49:50.336940 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-01 00:49:50.336950 | orchestrator | Sunday 01 March 2026 00:49:39 +0000 (0:00:02.731) 0:00:07.496 **********
2026-03-01 00:49:50.336968 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-01 00:49:38.006741', 'end': '2026-03-01 00:49:38.014100', 'delta': '0:00:00.007359', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[... identical "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory" (rc=2) results for testbed-node-0, testbed-node-2, testbed-node-3, testbed-manager, testbed-node-4 and testbed-node-5 elided ...]
2026-03-01 00:49:50.337165 | orchestrator |
2026-03-01 00:49:50.337177 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2026-03-01 00:49:50.337191 | orchestrator | Sunday 01 March 2026 00:49:41 +0000 (0:00:01.532) 0:00:09.028 **********
2026-03-01 00:49:50.337231 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-01 00:49:50.337245 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-01 00:49:50.337258 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-01 00:49:50.337267 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-01 00:49:50.337274 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-01 00:49:50.337282 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-01 00:49:50.337290 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-01 00:49:50.337298 | orchestrator |
2026-03-01 00:49:50.337306 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-01 00:49:50.337314 | orchestrator | Sunday 01 March 2026 00:49:44 +0000 (0:00:02.636) 0:00:11.664 **********
2026-03-01 00:49:50.337322 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-01 00:49:50.337330 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-01 00:49:50.337338 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-01 00:49:50.337345 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-01 00:49:50.337353 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-01 00:49:50.337361 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-01 00:49:50.337369 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-01 00:49:50.337376 | orchestrator |
2026-03-01 00:49:50.337384 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:49:50.337400 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337411 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337419 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337427 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337435 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337473 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337481 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:49:50.337489 | orchestrator |
2026-03-01 00:49:50.337505 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:49:50.337513 | orchestrator | Sunday 01 March 2026 00:49:46 +0000 (0:00:02.918) 0:00:14.583 **********
2026-03-01 00:49:50.337521 | orchestrator | ===============================================================================
2026-03-01 00:49:50.337529 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.65s
2026-03-01 00:49:50.337536 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.92s
2026-03-01 00:49:50.337544 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.73s
2026-03-01 00:49:50.337871 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.64s
2026-03-01 00:49:50.337895 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
--- 1.53s
2026-03-01 00:49:50.337904 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED
2026-03-01 00:49:50.337912 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task dcc6d59c-7ac3-4198-a978-fe238bc4707c is in state STARTED
2026-03-01 00:49:50.337931 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state STARTED
2026-03-01 00:49:50.337943 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task 87bf135b-1627-4c65-9237-d3440ed67768 is in state STARTED
2026-03-01 00:49:50.337966 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task 8419fdd5-5a57-4fdc-a487-14e6a8d0cad1 is in state SUCCESS
2026-03-01 00:49:50.337979 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:49:50.337999 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:49:50.338144 | orchestrator | 2026-03-01 00:49:50 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED
2026-03-01 00:49:50.338171 | orchestrator | 2026-03-01 00:49:50 | INFO  | Wait 1 second(s) until the next check
[... the remaining tasks keep polling in the same pattern every ~3 seconds; the only state changes are: ...]
2026-03-01 00:50:18.054823 | orchestrator | 2026-03-01 00:50:18 | INFO  | Task 87bf135b-1627-4c65-9237-d3440ed67768 is in state SUCCESS
2026-03-01 00:50:21.109133 | orchestrator | 2026-03-01 00:50:21 | INFO  | Task dcc6d59c-7ac3-4198-a978-fe238bc4707c is in state SUCCESS
[... the five remaining tasks (fdf9c491, cdb23ea9, 3fb90159, 208a93d0, 1b6f4b84) stay in state STARTED through 00:50:45 ...]
2026-03-01 00:50:45.658188 | orchestrator | 2026-03-01 00:50:45 | INFO  | Task
1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:50:45.658230 | orchestrator | 2026-03-01 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:50:48.703557 | orchestrator | 2026-03-01 00:50:48 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:50:48.704745 | orchestrator | 2026-03-01 00:50:48 | INFO  | Task cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state STARTED 2026-03-01 00:50:48.706525 | orchestrator | 2026-03-01 00:50:48 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:50:48.708865 | orchestrator | 2026-03-01 00:50:48 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:50:48.709567 | orchestrator | 2026-03-01 00:50:48 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:50:48.709597 | orchestrator | 2026-03-01 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:50:51.742313 | orchestrator | 2026-03-01 00:50:51 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:50:51.742776 | orchestrator | 2026-03-01 00:50:51 | INFO  | Task cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state STARTED 2026-03-01 00:50:51.743781 | orchestrator | 2026-03-01 00:50:51 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:50:51.744804 | orchestrator | 2026-03-01 00:50:51 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:50:51.746676 | orchestrator | 2026-03-01 00:50:51 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:50:51.746735 | orchestrator | 2026-03-01 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:50:54.769997 | orchestrator | 2026-03-01 00:50:54 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:50:54.771699 | orchestrator | 2026-03-01 00:50:54 | INFO  | Task 
cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state STARTED 2026-03-01 00:50:54.772678 | orchestrator | 2026-03-01 00:50:54 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:50:54.775097 | orchestrator | 2026-03-01 00:50:54 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:50:54.777006 | orchestrator | 2026-03-01 00:50:54 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:50:54.777067 | orchestrator | 2026-03-01 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:50:57.814508 | orchestrator | 2026-03-01 00:50:57 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:50:57.815808 | orchestrator | 2026-03-01 00:50:57 | INFO  | Task cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state STARTED 2026-03-01 00:50:57.817202 | orchestrator | 2026-03-01 00:50:57 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:50:57.818813 | orchestrator | 2026-03-01 00:50:57 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:50:57.820128 | orchestrator | 2026-03-01 00:50:57 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:50:57.820285 | orchestrator | 2026-03-01 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:00.856031 | orchestrator | 2026-03-01 00:51:00 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:00.856803 | orchestrator | 2026-03-01 00:51:00 | INFO  | Task cdb23ea9-2276-4c94-9f8a-ba09ed31c827 is in state SUCCESS 2026-03-01 00:51:00.857184 | orchestrator | 2026-03-01 00:51:00.857200 | orchestrator | 2026-03-01 00:51:00.857204 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-01 00:51:00.857210 | orchestrator | 2026-03-01 00:51:00.857217 | orchestrator | TASK [osism.services.homer : Inform about new parameter 
homer_url_opensearch_dashboards] *** 2026-03-01 00:51:00.857223 | orchestrator | Sunday 01 March 2026 00:49:34 +0000 (0:00:01.332) 0:00:01.332 ********** 2026-03-01 00:51:00.857230 | orchestrator | ok: [testbed-manager] => { 2026-03-01 00:51:00.857237 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-03-01 00:51:00.857244 | orchestrator | } 2026-03-01 00:51:00.857251 | orchestrator | 2026-03-01 00:51:00.857257 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-01 00:51:00.857264 | orchestrator | Sunday 01 March 2026 00:49:35 +0000 (0:00:00.214) 0:00:01.547 ********** 2026-03-01 00:51:00.857271 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857287 | orchestrator | 2026-03-01 00:51:00.857299 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-01 00:51:00.857304 | orchestrator | Sunday 01 March 2026 00:49:36 +0000 (0:00:00.981) 0:00:02.529 ********** 2026-03-01 00:51:00.857322 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-01 00:51:00.857326 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-01 00:51:00.857330 | orchestrator | 2026-03-01 00:51:00.857334 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-01 00:51:00.857337 | orchestrator | Sunday 01 March 2026 00:49:37 +0000 (0:00:01.588) 0:00:04.118 ********** 2026-03-01 00:51:00.857341 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857345 | orchestrator | 2026-03-01 00:51:00.857349 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-01 00:51:00.857352 | orchestrator | Sunday 01 March 2026 00:49:40 +0000 (0:00:02.970) 0:00:07.088 ********** 2026-03-01 00:51:00.857356 | orchestrator | changed: [testbed-manager] 2026-03-01 
00:51:00.857360 | orchestrator | 2026-03-01 00:51:00.857364 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-03-01 00:51:00.857367 | orchestrator | Sunday 01 March 2026 00:49:43 +0000 (0:00:02.961) 0:00:10.050 ********** 2026-03-01 00:51:00.857371 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-03-01 00:51:00.857375 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857378 | orchestrator | 2026-03-01 00:51:00.857382 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-03-01 00:51:00.857426 | orchestrator | Sunday 01 March 2026 00:50:11 +0000 (0:00:28.393) 0:00:38.444 ********** 2026-03-01 00:51:00.857430 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857434 | orchestrator | 2026-03-01 00:51:00.857438 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:51:00.857442 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:00.857447 | orchestrator | 2026-03-01 00:51:00.857451 | orchestrator | 2026-03-01 00:51:00.857455 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:51:00.857459 | orchestrator | Sunday 01 March 2026 00:50:14 +0000 (0:00:02.750) 0:00:41.195 ********** 2026-03-01 00:51:00.857462 | orchestrator | =============================================================================== 2026-03-01 00:51:00.857466 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.39s 2026-03-01 00:51:00.857472 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.97s 2026-03-01 00:51:00.857478 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.96s 2026-03-01 00:51:00.857488 | orchestrator | 
osism.services.homer : Restart homer service ---------------------------- 2.75s 2026-03-01 00:51:00.857495 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.59s 2026-03-01 00:51:00.857501 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.98s 2026-03-01 00:51:00.857507 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.22s 2026-03-01 00:51:00.857513 | orchestrator | 2026-03-01 00:51:00.857519 | orchestrator | 2026-03-01 00:51:00.857525 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-01 00:51:00.857531 | orchestrator | 2026-03-01 00:51:00.857537 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-01 00:51:00.857542 | orchestrator | Sunday 01 March 2026 00:49:32 +0000 (0:00:00.862) 0:00:00.862 ********** 2026-03-01 00:51:00.857547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-01 00:51:00.857555 | orchestrator | 2026-03-01 00:51:00.857561 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-01 00:51:00.857567 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:00.902) 0:00:01.764 ********** 2026-03-01 00:51:00.857574 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-01 00:51:00.857590 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-01 00:51:00.857596 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-01 00:51:00.857600 | orchestrator | 2026-03-01 00:51:00.857604 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-01 00:51:00.857608 | orchestrator | Sunday 01 
March 2026 00:49:35 +0000 (0:00:01.958) 0:00:03.723 ********** 2026-03-01 00:51:00.857612 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857615 | orchestrator | 2026-03-01 00:51:00.857619 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-01 00:51:00.857623 | orchestrator | Sunday 01 March 2026 00:49:37 +0000 (0:00:01.885) 0:00:05.608 ********** 2026-03-01 00:51:00.857633 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-01 00:51:00.857637 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857641 | orchestrator | 2026-03-01 00:51:00.857645 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-01 00:51:00.857649 | orchestrator | Sunday 01 March 2026 00:50:10 +0000 (0:00:33.238) 0:00:38.847 ********** 2026-03-01 00:51:00.857653 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857656 | orchestrator | 2026-03-01 00:51:00.857689 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-01 00:51:00.857693 | orchestrator | Sunday 01 March 2026 00:50:12 +0000 (0:00:01.395) 0:00:40.243 ********** 2026-03-01 00:51:00.857697 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857701 | orchestrator | 2026-03-01 00:51:00.857705 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-01 00:51:00.857709 | orchestrator | Sunday 01 March 2026 00:50:13 +0000 (0:00:00.925) 0:00:41.169 ********** 2026-03-01 00:51:00.857713 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857716 | orchestrator | 2026-03-01 00:51:00.857720 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-01 00:51:00.857724 | orchestrator | Sunday 01 March 2026 00:50:16 +0000 (0:00:03.047) 0:00:44.216 ********** 2026-03-01 
00:51:00.857728 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857732 | orchestrator | 2026-03-01 00:51:00.857735 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-01 00:51:00.857739 | orchestrator | Sunday 01 March 2026 00:50:16 +0000 (0:00:00.742) 0:00:44.959 ********** 2026-03-01 00:51:00.857748 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.857752 | orchestrator | 2026-03-01 00:51:00.857756 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-01 00:51:00.857760 | orchestrator | Sunday 01 March 2026 00:50:17 +0000 (0:00:00.459) 0:00:45.419 ********** 2026-03-01 00:51:00.857763 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857767 | orchestrator | 2026-03-01 00:51:00.857771 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:51:00.857775 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:00.857779 | orchestrator | 2026-03-01 00:51:00.857783 | orchestrator | 2026-03-01 00:51:00.857786 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:51:00.857790 | orchestrator | Sunday 01 March 2026 00:50:17 +0000 (0:00:00.324) 0:00:45.743 ********** 2026-03-01 00:51:00.857794 | orchestrator | =============================================================================== 2026-03-01 00:51:00.857797 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.24s 2026-03-01 00:51:00.857801 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.05s 2026-03-01 00:51:00.857805 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.96s 2026-03-01 00:51:00.857809 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 1.89s 2026-03-01 00:51:00.857812 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.40s 2026-03-01 00:51:00.857819 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.93s 2026-03-01 00:51:00.857824 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.90s 2026-03-01 00:51:00.857830 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.74s 2026-03-01 00:51:00.857835 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.46s 2026-03-01 00:51:00.857839 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2026-03-01 00:51:00.857846 | orchestrator | 2026-03-01 00:51:00.857855 | orchestrator | 2026-03-01 00:51:00.857864 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-01 00:51:00.857940 | orchestrator | 2026-03-01 00:51:00.857947 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-01 00:51:00.857953 | orchestrator | Sunday 01 March 2026 00:49:53 +0000 (0:00:00.261) 0:00:00.261 ********** 2026-03-01 00:51:00.857959 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.857965 | orchestrator | 2026-03-01 00:51:00.857971 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-01 00:51:00.857977 | orchestrator | Sunday 01 March 2026 00:49:54 +0000 (0:00:00.819) 0:00:01.081 ********** 2026-03-01 00:51:00.857983 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-01 00:51:00.857990 | orchestrator | 2026-03-01 00:51:00.857996 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-01 00:51:00.858002 | orchestrator | Sunday 01 March 2026 00:49:54 +0000 (0:00:00.503) 
0:00:01.584 ********** 2026-03-01 00:51:00.858009 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.858047 | orchestrator | 2026-03-01 00:51:00.858052 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-01 00:51:00.858056 | orchestrator | Sunday 01 March 2026 00:49:55 +0000 (0:00:01.091) 0:00:02.675 ********** 2026-03-01 00:51:00.858061 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-01 00:51:00.858066 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:00.858070 | orchestrator | 2026-03-01 00:51:00.858075 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-01 00:51:00.858079 | orchestrator | Sunday 01 March 2026 00:50:47 +0000 (0:00:51.693) 0:00:54.369 ********** 2026-03-01 00:51:00.858084 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:00.858088 | orchestrator | 2026-03-01 00:51:00.858092 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:51:00.858097 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:00.858101 | orchestrator | 2026-03-01 00:51:00.858105 | orchestrator | 2026-03-01 00:51:00.858110 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:51:00.858120 | orchestrator | Sunday 01 March 2026 00:50:57 +0000 (0:00:10.559) 0:01:04.928 ********** 2026-03-01 00:51:00.858126 | orchestrator | =============================================================================== 2026-03-01 00:51:00.858130 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.69s 2026-03-01 00:51:00.858134 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 10.56s 2026-03-01 00:51:00.858139 | orchestrator | 
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.09s 2026-03-01 00:51:00.858143 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.82s 2026-03-01 00:51:00.858148 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.50s 2026-03-01 00:51:00.858152 | orchestrator | 2026-03-01 00:51:00 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:00.859295 | orchestrator | 2026-03-01 00:51:00 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:00.859916 | orchestrator | 2026-03-01 00:51:00 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state STARTED 2026-03-01 00:51:00.859949 | orchestrator | 2026-03-01 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:03.909545 | orchestrator | 2026-03-01 00:51:03 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:03.910482 | orchestrator | 2026-03-01 00:51:03 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:03.917354 | orchestrator | 2026-03-01 00:51:03 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:03.917699 | orchestrator | 2026-03-01 00:51:03.917717 | orchestrator | 2026-03-01 00:51:03 | INFO  | Task 1b6f4b84-fc74-49b3-9b40-f39adcdc968e is in state SUCCESS 2026-03-01 00:51:03.920762 | orchestrator | 2026-03-01 00:51:03.920803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 00:51:03.920811 | orchestrator | 2026-03-01 00:51:03.920817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 00:51:03.920824 | orchestrator | Sunday 01 March 2026 00:49:32 +0000 (0:00:00.653) 0:00:00.653 ********** 2026-03-01 00:51:03.920831 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 
2026-03-01 00:51:03.920837 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-01 00:51:03.920843 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-01 00:51:03.920850 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-01 00:51:03.920856 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-01 00:51:03.920863 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-01 00:51:03.920870 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-01 00:51:03.920876 | orchestrator |
2026-03-01 00:51:03.920883 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-01 00:51:03.920890 | orchestrator |
2026-03-01 00:51:03.920896 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-01 00:51:03.920908 | orchestrator | Sunday 01 March 2026 00:49:35 +0000 (0:00:02.306) 0:00:02.960 **********
2026-03-01 00:51:03.920917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 00:51:03.920925 | orchestrator |
2026-03-01 00:51:03.920929 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-01 00:51:03.920933 | orchestrator | Sunday 01 March 2026 00:49:36 +0000 (0:00:01.591) 0:00:04.551 **********
2026-03-01 00:51:03.920937 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:51:03.920942 | orchestrator | ok: [testbed-manager]
2026-03-01 00:51:03.920945 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:03.920949 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:51:03.920953 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:51:03.920957 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:03.920961 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:51:03.920964 | orchestrator |
2026-03-01 00:51:03.920968 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-01 00:51:03.920972 | orchestrator | Sunday 01 March 2026 00:49:38 +0000 (0:00:02.069) 0:00:06.620 **********
2026-03-01 00:51:03.920976 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:51:03.920980 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:03.920984 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:51:03.920987 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:03.920991 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:51:03.920995 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:51:03.920999 | orchestrator | ok: [testbed-manager]
2026-03-01 00:51:03.921002 | orchestrator |
2026-03-01 00:51:03.921006 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-01 00:51:03.921010 | orchestrator | Sunday 01 March 2026 00:49:43 +0000 (0:00:04.314) 0:00:10.935 **********
2026-03-01 00:51:03.921024 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:03.921028 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:03.921031 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:03.921035 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:03.921039 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:03.921043 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:03.921046 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:03.921050 | orchestrator |
2026-03-01 00:51:03.921054 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-01 00:51:03.921058 | orchestrator | Sunday 01 March 2026 00:49:47 +0000 (0:00:04.175) 0:00:15.111 **********
2026-03-01 00:51:03.921062 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:03.921065 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:03.921069 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:03.921073 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:03.921076 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:03.921081 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:03.921087 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:03.921097 | orchestrator |
2026-03-01 00:51:03.921104 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-01 00:51:03.921110 | orchestrator | Sunday 01 March 2026 00:49:57 +0000 (0:00:10.357) 0:00:25.468 **********
2026-03-01 00:51:03.921116 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:03.921122 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:03.921127 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:03.921133 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:03.921139 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:03.921145 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:03.921150 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:03.921157 | orchestrator |
2026-03-01 00:51:03.921163 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-01 00:51:03.921170 | orchestrator | Sunday 01 March 2026 00:50:34 +0000 (0:00:36.989) 0:01:02.457 **********
2026-03-01 00:51:03.921177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 00:51:03.921184 | orchestrator |
2026-03-01 00:51:03.921191 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-01 00:51:03.921197 | orchestrator | Sunday 01 March 2026 00:50:36 +0000 (0:00:01.349) 0:01:03.807 **********
2026-03-01 00:51:03.921203 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-01 00:51:03.921210 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-01 00:51:03.921217 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-01 00:51:03.921223 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-01 00:51:03.921239 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-01 00:51:03.921245 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-01 00:51:03.921252 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-01 00:51:03.921257 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-01 00:51:03.921261 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-01 00:51:03.921264 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-01 00:51:03.921268 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-01 00:51:03.921272 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-01 00:51:03.921276 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-01 00:51:03.921279 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-01 00:51:03.921283 | orchestrator |
2026-03-01 00:51:03.921287 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-01 00:51:03.921296 | orchestrator | Sunday 01 March 2026 00:50:41 +0000 (0:00:05.243) 0:01:09.051 **********
2026-03-01 00:51:03.921299 | orchestrator | ok: [testbed-manager]
2026-03-01 00:51:03.921303 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:03.921308 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:51:03.921311 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:51:03.921318 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:51:03.921321 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:03.921325 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:51:03.921329 | orchestrator |
2026-03-01 00:51:03.921333 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-01 00:51:03.921336 | orchestrator | Sunday 01 March 2026 00:50:42 +0000 (0:00:01.289) 0:01:10.340 **********
2026-03-01 00:51:03.921340 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:03.921344 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:03.921348 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:03.921351 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:03.921356 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:03.921361 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:03.921365 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:03.921370 | orchestrator |
2026-03-01 00:51:03.921374 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-01 00:51:03.921378 | orchestrator | Sunday 01 March 2026 00:50:44 +0000 (0:00:01.584) 0:01:11.924 **********
2026-03-01 00:51:03.921401 | orchestrator | ok: [testbed-manager]
2026-03-01 00:51:03.921406 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:03.921411 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:51:03.921415 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:51:03.921420 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:51:03.921424 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:03.921429 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:51:03.921433 | orchestrator |
2026-03-01 00:51:03.921438 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-01 00:51:03.921442 | orchestrator | Sunday 01 March 2026 00:50:45 +0000 (0:00:01.215) 0:01:13.139 **********
2026-03-01 00:51:03.921447 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:03.921451 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:03.921456 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:51:03.921460 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:51:03.921465 | orchestrator | ok: [testbed-manager] 2026-03-01 00:51:03.921469 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:51:03.921473 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:51:03.921478 | orchestrator | 2026-03-01 00:51:03.921482 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-01 00:51:03.921487 | orchestrator | Sunday 01 March 2026 00:50:47 +0000 (0:00:02.606) 0:01:15.746 ********** 2026-03-01 00:51:03.921494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-01 00:51:03.921505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:51:03.921517 | orchestrator | 2026-03-01 00:51:03.921524 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-01 00:51:03.921531 | orchestrator | Sunday 01 March 2026 00:50:49 +0000 (0:00:01.273) 0:01:17.019 ********** 2026-03-01 00:51:03.921537 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:03.921544 | orchestrator | 2026-03-01 00:51:03.921550 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-01 00:51:03.921557 | orchestrator | Sunday 01 March 2026 00:50:50 +0000 (0:00:01.756) 0:01:18.776 ********** 2026-03-01 00:51:03.921563 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:51:03.921570 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:51:03.921577 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:51:03.921583 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:51:03.921594 | orchestrator | 
changed: [testbed-node-3] 2026-03-01 00:51:03.921601 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:51:03.921609 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:03.921617 | orchestrator | 2026-03-01 00:51:03.921624 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:51:03.921632 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921637 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921641 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921645 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921653 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921657 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921661 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:51:03.921665 | orchestrator | 2026-03-01 00:51:03.921668 | orchestrator | 2026-03-01 00:51:03.921672 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:51:03.921676 | orchestrator | Sunday 01 March 2026 00:51:01 +0000 (0:00:10.921) 0:01:29.698 ********** 2026-03-01 00:51:03.921680 | orchestrator | =============================================================================== 2026-03-01 00:51:03.921684 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.99s 2026-03-01 00:51:03.921688 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 10.92s 2026-03-01 00:51:03.921691 | 
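The PLAY RECAP above is the machine-checkable summary of the run: each host line carries `ok`/`changed`/`unreachable`/`failed` counters in a fixed `key=value` layout. A minimal sketch of how a wrapper script could parse one such line (helper name and regex are illustrative, not part of the OSISM tooling):

```python
import re

# Parse one "PLAY RECAP" host line, e.g.
#   testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
# into (hostname, {counter: value}); a caller could then fail fast when
# failed > 0 or unreachable > 0.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>.*)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
assert host == "testbed-manager" and counters["failed"] == 0
```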
orchestrator | osism.services.netdata : Add repository -------------------------------- 10.36s 2026-03-01 00:51:03.921697 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.24s 2026-03-01 00:51:03.921701 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.31s 2026-03-01 00:51:03.921705 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.18s 2026-03-01 00:51:03.921709 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.61s 2026-03-01 00:51:03.921713 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.31s 2026-03-01 00:51:03.921716 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.07s 2026-03-01 00:51:03.921720 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.76s 2026-03-01 00:51:03.921724 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.59s 2026-03-01 00:51:03.921728 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.58s 2026-03-01 00:51:03.921731 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.35s 2026-03-01 00:51:03.921735 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.29s 2026-03-01 00:51:03.921739 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.27s 2026-03-01 00:51:03.921743 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.22s 2026-03-01 00:51:03.921747 | orchestrator | 2026-03-01 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:06.958854 | orchestrator | 2026-03-01 00:51:06 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 
00:51:06.959766 | orchestrator | 2026-03-01 00:51:06 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:06.964510 | orchestrator | 2026-03-01 00:51:06 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:06.964570 | orchestrator | 2026-03-01 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:10.015718 | orchestrator | 2026-03-01 00:51:10 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:10.017620 | orchestrator | 2026-03-01 00:51:10 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:10.019500 | orchestrator | 2026-03-01 00:51:10 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:10.019618 | orchestrator | 2026-03-01 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:13.052196 | orchestrator | 2026-03-01 00:51:13 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:13.055644 | orchestrator | 2026-03-01 00:51:13 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:13.056626 | orchestrator | 2026-03-01 00:51:13 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:13.057038 | orchestrator | 2026-03-01 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:16.091494 | orchestrator | 2026-03-01 00:51:16 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:16.092077 | orchestrator | 2026-03-01 00:51:16 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:16.092966 | orchestrator | 2026-03-01 00:51:16 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:16.093029 | orchestrator | 2026-03-01 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:19.131930 | orchestrator | 2026-03-01 00:51:19 | 
INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:19.133773 | orchestrator | 2026-03-01 00:51:19 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:19.135223 | orchestrator | 2026-03-01 00:51:19 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:19.135271 | orchestrator | 2026-03-01 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:22.186099 | orchestrator | 2026-03-01 00:51:22 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:22.189156 | orchestrator | 2026-03-01 00:51:22 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:22.192138 | orchestrator | 2026-03-01 00:51:22 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:22.192581 | orchestrator | 2026-03-01 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:25.231926 | orchestrator | 2026-03-01 00:51:25 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:25.234143 | orchestrator | 2026-03-01 00:51:25 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:25.235944 | orchestrator | 2026-03-01 00:51:25 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:25.235992 | orchestrator | 2026-03-01 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:28.280352 | orchestrator | 2026-03-01 00:51:28 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:28.281521 | orchestrator | 2026-03-01 00:51:28 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:28.284183 | orchestrator | 2026-03-01 00:51:28 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:28.284247 | orchestrator | 2026-03-01 00:51:28 | INFO  | Wait 1 second(s) until 
the next check 2026-03-01 00:51:31.323271 | orchestrator | 2026-03-01 00:51:31 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:31.323326 | orchestrator | 2026-03-01 00:51:31 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:31.323334 | orchestrator | 2026-03-01 00:51:31 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:31.323340 | orchestrator | 2026-03-01 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:34.364061 | orchestrator | 2026-03-01 00:51:34 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:34.365074 | orchestrator | 2026-03-01 00:51:34 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:34.368448 | orchestrator | 2026-03-01 00:51:34 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:34.368489 | orchestrator | 2026-03-01 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:37.405277 | orchestrator | 2026-03-01 00:51:37 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:37.406143 | orchestrator | 2026-03-01 00:51:37 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:37.408806 | orchestrator | 2026-03-01 00:51:37 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:51:37.408875 | orchestrator | 2026-03-01 00:51:37 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:40.446803 | orchestrator | 2026-03-01 00:51:40 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state STARTED 2026-03-01 00:51:40.449527 | orchestrator | 2026-03-01 00:51:40 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:51:40.449612 | orchestrator | 2026-03-01 00:51:40 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 
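The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a simple poll loop: the deploy wrapper re-queries each task ID once per second until all reach a terminal state. A generic sketch of that pattern (function and parameter names are illustrative; `fetch_state` stands in for the real status call, which reports Celery-style states such as STARTED and SUCCESS):

```python
import time

# States after which a task is no longer re-checked.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll fetch_state(task_id) for every pending task until all are terminal."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):          # iterate a snapshot; set may shrink
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
```

Injecting `sleep` keeps the loop testable without real delays; the production behavior matches the one-second cadence visible in the log.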
00:51:40.449624 | orchestrator | 2026-03-01 00:51:40 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:51:43.487597 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task fdf9c491-1fb7-4cf1-bbd9-c3d56f37c70e is in state SUCCESS 2026-03-01 00:51:43.493444 | orchestrator | 2026-03-01 00:51:43.493563 | orchestrator | 2026-03-01 00:51:43.493593 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-01 00:51:43.493617 | orchestrator | 2026-03-01 00:51:43.493642 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-01 00:51:43.493703 | orchestrator | Sunday 01 March 2026 00:49:25 +0000 (0:00:00.329) 0:00:00.329 ********** 2026-03-01 00:51:43.493729 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:51:43.493754 | orchestrator | 2026-03-01 00:51:43.493779 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-01 00:51:43.493804 | orchestrator | Sunday 01 March 2026 00:49:26 +0000 (0:00:01.136) 0:00:01.466 ********** 2026-03-01 00:51:43.493822 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493831 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493841 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493850 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493859 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493892 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493902 | orchestrator | changed: 
[testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493912 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.493922 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493931 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493941 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.493951 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.493960 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493970 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-01 00:51:43.493977 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493983 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493988 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.493993 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-01 00:51:43.493998 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.494003 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.494008 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-01 00:51:43.494058 | orchestrator | 2026-03-01 00:51:43.494066 | orchestrator | TASK [common : include_tasks] ************************************************** 
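Each loop item printed by the common role carries a full service dict (`fluentd`, `kolla-toolbox`, `cron`) because kolla-ansible-style roles iterate a service map with the `dict2items` filter, yielding `{'key': <service name>, 'value': <service dict>}` per item. A minimal sketch of that loop shape, assuming a `common_services` variable; this is illustrative, not the verbatim upstream task:

```yaml
# Sketch: iterate a service map so item.key is the service name and
# item.value holds its settings (image, volumes, enabled flag, ...).
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  loop: "{{ common_services | dict2items }}"
  when: item.value.enabled | bool
```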
2026-03-01 00:51:43.494071 | orchestrator | Sunday 01 March 2026 00:49:30 +0000 (0:00:03.988) 0:00:05.455 ********** 2026-03-01 00:51:43.494077 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:51:43.494083 | orchestrator | 2026-03-01 00:51:43.494089 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-01 00:51:43.494094 | orchestrator | Sunday 01 March 2026 00:49:31 +0000 (0:00:01.196) 0:00:06.651 ********** 2026-03-01 00:51:43.494103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494112 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494199 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.494249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494289 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494316 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.494410 | orchestrator | 2026-03-01 00:51:43.494419 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-01 00:51:43.494428 | orchestrator | Sunday 01 March 2026 00:49:36 +0000 (0:00:04.621) 0:00:11.272 ********** 2026-03-01 00:51:43.494437 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494465 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:51:43.494474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494524 | 
orchestrator | skipping: [testbed-node-0] 2026-03-01 00:51:43.494529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494548 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:51:43.494554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494573 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:51:43.494582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494598 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:51:43.494606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494626 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:51:43.494631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494653 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:51:43.494658 | orchestrator | 2026-03-01 00:51:43.494663 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-01 00:51:43.494668 | orchestrator | Sunday 01 March 2026 00:49:37 +0000 (0:00:01.388) 0:00:12.661 ********** 2026-03-01 00:51:43.494674 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494682 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494693 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:51:43.494765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494782 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494810 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:51:43.494816 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:51:43.494824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494878 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:51:43.494884 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:51:43.494889 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:51:43.494900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-01 00:51:43.494906 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.494917 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:51:43.494922 | orchestrator | 2026-03-01 00:51:43.494927 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-01 00:51:43.494932 | orchestrator | Sunday 01 March 2026 00:49:41 +0000 (0:00:03.556) 0:00:16.217 ********** 2026-03-01 00:51:43.494937 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:51:43.494942 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:51:43.494947 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:51:43.494952 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:51:43.494958 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:51:43.494966 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:51:43.494971 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:51:43.494976 | orchestrator | 2026-03-01 00:51:43.494981 
| orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-01 00:51:43.494986 | orchestrator | Sunday 01 March 2026 00:49:42 +0000 (0:00:01.083) 0:00:17.300 ********** 2026-03-01 00:51:43.494992 | orchestrator | skipping: [testbed-manager] 2026-03-01 00:51:43.494997 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:51:43.495002 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:51:43.495007 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:51:43.495012 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:51:43.495017 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:51:43.495022 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:51:43.495027 | orchestrator | 2026-03-01 00:51:43.495032 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-01 00:51:43.495037 | orchestrator | Sunday 01 March 2026 00:49:43 +0000 (0:00:00.858) 0:00:18.158 ********** 2026-03-01 00:51:43.495042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495062 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495115 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495169 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495190 | orchestrator | 2026-03-01 00:51:43.495195 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-01 00:51:43.495200 | orchestrator | Sunday 01 March 2026 00:49:50 +0000 (0:00:07.663) 0:00:25.822 ********** 2026-03-01 00:51:43.495206 | orchestrator | [WARNING]: Skipped 2026-03-01 00:51:43.495213 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-01 00:51:43.495218 | orchestrator | to this access issue: 2026-03-01 00:51:43.495223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-01 00:51:43.495228 | orchestrator | directory 2026-03-01 00:51:43.495233 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 00:51:43.495239 | orchestrator | 2026-03-01 00:51:43.495244 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-01 00:51:43.495249 | orchestrator | Sunday 01 March 2026 00:49:52 +0000 (0:00:01.902) 0:00:27.724 ********** 2026-03-01 00:51:43.495254 | orchestrator | [WARNING]: Skipped 2026-03-01 00:51:43.495259 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-01 00:51:43.495267 | orchestrator | to this access issue: 2026-03-01 00:51:43.495273 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-01 00:51:43.495278 | orchestrator | directory 2026-03-01 00:51:43.495283 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 00:51:43.495288 | orchestrator | 2026-03-01 00:51:43.495297 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-01 00:51:43.495302 | orchestrator | Sunday 01 March 2026 00:49:53 +0000 (0:00:00.841) 0:00:28.566 ********** 2026-03-01 00:51:43.495307 | orchestrator | [WARNING]: Skipped 2026-03-01 00:51:43.495312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-01 00:51:43.495317 | orchestrator | to this access issue: 2026-03-01 00:51:43.495322 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-01 00:51:43.495327 | orchestrator | directory 2026-03-01 00:51:43.495332 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 00:51:43.495337 | orchestrator | 2026-03-01 00:51:43.495342 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-01 00:51:43.495347 | orchestrator | Sunday 01 March 2026 00:49:54 +0000 (0:00:00.889) 0:00:29.455 ********** 2026-03-01 00:51:43.495373 | orchestrator | [WARNING]: Skipped 2026-03-01 00:51:43.495382 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-01 00:51:43.495391 | orchestrator | to this access issue: 2026-03-01 00:51:43.495397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-01 00:51:43.495403 | orchestrator | directory 2026-03-01 00:51:43.495408 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 00:51:43.495415 | orchestrator | 2026-03-01 00:51:43.495421 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-01 
00:51:43.495426 | orchestrator | Sunday 01 March 2026 00:49:55 +0000 (0:00:00.934) 0:00:30.390 ********** 2026-03-01 00:51:43.495432 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:51:43.495438 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:51:43.495443 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:51:43.495449 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:51:43.495455 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:43.495460 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:51:43.495469 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:51:43.495475 | orchestrator | 2026-03-01 00:51:43.495481 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-01 00:51:43.495487 | orchestrator | Sunday 01 March 2026 00:50:00 +0000 (0:00:05.020) 0:00:35.411 ********** 2026-03-01 00:51:43.495493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495500 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495511 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495517 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495524 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495529 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-01 00:51:43.495535 | orchestrator | 2026-03-01 00:51:43.495541 | orchestrator | TASK [common : Ensure RabbitMQ 
Erlang cookie exists] *************************** 2026-03-01 00:51:43.495547 | orchestrator | Sunday 01 March 2026 00:50:05 +0000 (0:00:05.197) 0:00:40.609 ********** 2026-03-01 00:51:43.495552 | orchestrator | changed: [testbed-manager] 2026-03-01 00:51:43.495558 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:51:43.495564 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:51:43.495570 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:51:43.495576 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:51:43.495581 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:51:43.495587 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:51:43.495597 | orchestrator | 2026-03-01 00:51:43.495603 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-01 00:51:43.495609 | orchestrator | Sunday 01 March 2026 00:50:09 +0000 (0:00:03.678) 0:00:44.287 ********** 2026-03-01 00:51:43.495615 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495640 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495656 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495676 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495685 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495696 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495704 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495709 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:51:43.495724 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495729 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495745 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495750 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495758 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495763 | orchestrator | 2026-03-01 00:51:43.495769 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-01 00:51:43.495774 | orchestrator | Sunday 01 March 2026 00:50:11 +0000 (0:00:02.563) 0:00:46.851 ********** 2026-03-01 00:51:43.495779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495784 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495789 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495799 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495804 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495809 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495814 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-01 00:51:43.495819 | orchestrator | 2026-03-01 00:51:43.495824 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-01 00:51:43.495829 | orchestrator | Sunday 01 March 2026 00:50:15 
+0000 (0:00:03.189) 0:00:50.041 ********** 2026-03-01 00:51:43.495834 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495849 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495859 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495864 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-01 00:51:43.495869 | orchestrator | 2026-03-01 00:51:43.495875 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-01 00:51:43.495880 | orchestrator | Sunday 01 March 2026 00:50:17 +0000 (0:00:02.816) 0:00:52.858 ********** 2026-03-01 00:51:43.495885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495900 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495920 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495943 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-01 00:51:43.495954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495977 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.495996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.496002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.496007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:51:43.496020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-01 00:51:43.496025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-01 00:51:43.496030 | orchestrator |
2026-03-01 00:51:43.496035 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-01 00:51:43.496041 | orchestrator | Sunday 01 March 2026 00:50:20 +0000 (0:00:02.930) 0:00:55.788 **********
2026-03-01 00:51:43.496046 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:43.496051 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:43.496056 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:43.496061 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:43.496066 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:43.496071 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:43.496076 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:43.496081 | orchestrator |
2026-03-01 00:51:43.496086 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-01 00:51:43.496091 | orchestrator | Sunday 01 March 2026 00:50:22 +0000 (0:00:01.443) 0:00:57.232 **********
2026-03-01 00:51:43.496097 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:43.496102 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:43.496107 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:43.496112 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:43.496117 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:43.496122 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:43.496127 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:43.496132 | orchestrator |
2026-03-01 00:51:43.496137 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496142 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.061) 0:00:58.344 **********
2026-03-01 00:51:43.496147 | orchestrator |
2026-03-01 00:51:43.496152 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496157 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.062) 0:00:58.406 **********
2026-03-01 00:51:43.496162 | orchestrator |
2026-03-01 00:51:43.496168 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496173 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.166) 0:00:58.634 **********
2026-03-01 00:51:43.496178 | orchestrator |
2026-03-01 00:51:43.496183 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496188 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.059) 0:00:58.694 **********
2026-03-01 00:51:43.496193 | orchestrator |
2026-03-01 00:51:43.496198 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496203 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.059) 0:00:58.753 **********
2026-03-01 00:51:43.496223 | orchestrator |
2026-03-01 00:51:43.496228 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-01 00:51:43.496238 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:00.058) 0:00:58.812 **********
2026-03-01 00:51:43.496243 | orchestrator |
2026-03-01 00:51:43.496248 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-01 00:51:43.496255 | orchestrator | Sunday 01 March 2026 00:50:24 +0000 (0:00:00.083) 0:00:58.895 **********
2026-03-01 00:51:43.496261 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:43.496266 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:43.496271 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:43.496276 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:43.496281 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:43.496286 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:43.496291 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:43.496296 | orchestrator |
2026-03-01 00:51:43.496301 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-01 00:51:43.496306 | orchestrator | Sunday 01 March 2026 00:50:57 +0000 (0:00:33.847) 0:01:32.742 **********
2026-03-01 00:51:43.496311 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:43.496316 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:43.496321 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:43.496326 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:43.496331 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:43.496336 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:43.496341 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:43.496346 | orchestrator |
2026-03-01 00:51:43.496374 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-01 00:51:43.496383 | orchestrator | Sunday 01 March 2026 00:51:28 +0000 (0:00:30.383) 0:02:03.126 **********
2026-03-01 00:51:43.496399 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:51:43.496414 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:51:43.496426 | orchestrator | ok: [testbed-manager]
2026-03-01 00:51:43.496434 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:51:43.496441 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:51:43.496448 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:51:43.496455 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:51:43.496463 | orchestrator |
2026-03-01 00:51:43.496471 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-01 00:51:43.496479 | orchestrator | Sunday 01 March 2026 00:51:30 +0000 (0:00:02.115) 0:02:05.241 **********
2026-03-01 00:51:43.496486 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:51:43.496493 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:51:43.496501 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:51:43.496508 | orchestrator | changed: [testbed-manager]
2026-03-01 00:51:43.496516 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:51:43.496524 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:51:43.496536 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:51:43.496544 | orchestrator |
2026-03-01 00:51:43.496552 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:51:43.496563 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496572 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496581 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496590 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496599 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496607 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496622 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-01 00:51:43.496630 | orchestrator |
2026-03-01 00:51:43.496638 | orchestrator |
2026-03-01 00:51:43.496647 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:51:43.496656 | orchestrator | Sunday 01 March 2026 00:51:40 +0000 (0:00:09.937) 0:02:15.178 **********
2026-03-01 00:51:43.496664 | orchestrator | ===============================================================================
2026-03-01 00:51:43.496673 | orchestrator | common : Restart fluentd container ------------------------------------- 33.85s
2026-03-01 00:51:43.496681 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.38s
2026-03-01 00:51:43.496689 | orchestrator | common : Restart cron container ----------------------------------------- 9.94s
2026-03-01 00:51:43.496698 | orchestrator | common : Copying over config.json files for services -------------------- 7.66s
2026-03-01 00:51:43.496707 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.20s
2026-03-01 00:51:43.496716 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.02s
2026-03-01 00:51:43.496725 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.62s
2026-03-01 00:51:43.496731 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.99s
2026-03-01 00:51:43.496736 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.68s
2026-03-01 00:51:43.496741 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.56s
2026-03-01 00:51:43.496746 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s
2026-03-01 00:51:43.496751 | orchestrator | common : Check common containers ---------------------------------------- 2.93s
2026-03-01 00:51:43.496756 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.82s
2026-03-01 00:51:43.496761 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.56s
2026-03-01 00:51:43.496770 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s
2026-03-01 00:51:43.496776 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.90s
2026-03-01 00:51:43.496781 | orchestrator | common : Creating log volume -------------------------------------------- 1.44s
2026-03-01 00:51:43.496786 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.39s
2026-03-01 00:51:43.496791 | orchestrator | common : include_tasks -------------------------------------------------- 1.20s
2026-03-01 00:51:43.496796 | orchestrator | common : include_tasks -------------------------------------------------- 1.14s
2026-03-01 00:51:43.496801 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:43.498284 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state STARTED
2026-03-01 00:51:43.499125 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:43.500162 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:43.503421 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:43.504863 | orchestrator | 2026-03-01 00:51:43 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:43.504896 | orchestrator | 2026-03-01 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:51:46.531182 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:46.531302 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state STARTED
2026-03-01 00:51:46.532030 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:46.532524 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:46.533296 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:46.534080 | orchestrator | 2026-03-01 00:51:46 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:46.534148 | orchestrator | 2026-03-01 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:51:49.562534 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:49.562623 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state STARTED
2026-03-01 00:51:49.563085 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:49.563477 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:49.564074 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:49.565392 | orchestrator | 2026-03-01 00:51:49 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:49.565421 | orchestrator | 2026-03-01 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:51:52.604414 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:52.604488 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state STARTED
2026-03-01 00:51:52.604495 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:52.604499 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:52.604503 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:52.604507 | orchestrator | 2026-03-01 00:51:52 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:52.604511 | orchestrator | 2026-03-01 00:51:52 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:51:55.643753 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:55.647993 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state STARTED
2026-03-01 00:51:55.652224 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:55.652378 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:55.652393 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:55.652757 | orchestrator | 2026-03-01 00:51:55 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:55.652772 | orchestrator | 2026-03-01 00:51:55 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:51:58.675552 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:51:58.677720 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:51:58.677897 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 50d2cbce-1458-4b1c-9f2f-e06722897d95 is in state SUCCESS
2026-03-01 00:51:58.678583 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:51:58.679101 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:51:58.680112 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:51:58.680841 | orchestrator | 2026-03-01 00:51:58 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:51:58.680875 | orchestrator | 2026-03-01 00:51:58 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:52:01.704506 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:52:01.708502 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:52:01.708871 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:52:01.709329 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:52:01.710263 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:52:01.710777 | orchestrator | 2026-03-01 00:52:01 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:52:01.710817 | orchestrator | 2026-03-01 00:52:01 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:52:04.935405 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:52:04.936223 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:52:04.937403 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:52:04.938636 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:52:04.939780 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:52:04.942667 | orchestrator | 2026-03-01 00:52:04 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:52:04.943715 | orchestrator | 2026-03-01 00:52:04 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:52:08.054249 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:52:08.054657 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:52:08.055302 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:52:08.055736 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state STARTED
2026-03-01 00:52:08.056519 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:52:08.057269 | orchestrator | 2026-03-01 00:52:08 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED
2026-03-01 00:52:08.057307 | orchestrator | 2026-03-01 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:52:11.093859 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:52:11.095946 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:52:11.096027 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:52:11.096435 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task 26c10ee0-5e45-467b-9f62-d2462fca5cbf is in state SUCCESS
2026-03-01 00:52:11.097946 | orchestrator |
2026-03-01 00:52:11.097992 | orchestrator |
2026-03-01 00:52:11.098001 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 00:52:11.098008 | orchestrator |
2026-03-01 00:52:11.098014 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 00:52:11.098074 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.248) 0:00:00.248 **********
2026-03-01 00:52:11.098078 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:52:11.098083 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:52:11.098088 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:52:11.098091 | orchestrator |
2026-03-01 00:52:11.098095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 00:52:11.098100 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.291) 0:00:00.539 **********
2026-03-01 00:52:11.098104 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-01 00:52:11.098108 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-01 00:52:11.098112 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-01 00:52:11.098115 | orchestrator |
2026-03-01 00:52:11.098119 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-01 00:52:11.098124 | orchestrator |
2026-03-01 00:52:11.098127 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-01 00:52:11.098131 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.346) 0:00:00.885 **********
2026-03-01 00:52:11.098135 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:52:11.098139 | orchestrator |
2026-03-01 00:52:11.098143 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-01 00:52:11.098147 | orchestrator | Sunday 01 March 2026 00:51:49 +0000 (0:00:00.547) 0:00:01.433 **********
2026-03-01 00:52:11.098157 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-01 00:52:11.098161 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-01 00:52:11.098165 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-01 00:52:11.098169 | orchestrator |
2026-03-01 00:52:11.098173 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-01 00:52:11.098176 | orchestrator | Sunday 01 March 2026 00:51:49 +0000 (0:00:00.704) 0:00:02.137 **********
2026-03-01 00:52:11.098180 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-01 00:52:11.098184 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-01 00:52:11.098188 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-01 00:52:11.098191 | orchestrator |
2026-03-01 00:52:11.098195 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-01 00:52:11.098199 | orchestrator | Sunday 01 March 2026 00:51:52 +0000 (0:00:02.322) 0:00:04.460 **********
2026-03-01 00:52:11.098203 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:52:11.098207 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:52:11.098210 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:52:11.098214 | orchestrator |
2026-03-01 00:52:11.098218 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-01 00:52:11.098222 | orchestrator | Sunday 01 March 2026 00:51:53 +0000 (0:00:01.755) 0:00:06.215 **********
2026-03-01 00:52:11.098225 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:52:11.098229 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:52:11.098233 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:52:11.098249 | orchestrator |
2026-03-01 00:52:11.098254 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:52:11.098258 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:52:11.098263 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:52:11.098266 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 00:52:11.098270 | orchestrator |
2026-03-01 00:52:11.098274 | orchestrator |
2026-03-01 00:52:11.098278 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:52:11.098282 | orchestrator | Sunday 01 March 2026 00:51:56 +0000 (0:00:02.234) 0:00:08.450 **********
2026-03-01 00:52:11.098286 | orchestrator | ===============================================================================
2026-03-01 00:52:11.098289 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.32s
2026-03-01 00:52:11.098293 | orchestrator | memcached : Restart memcached container --------------------------------- 2.23s
2026-03-01 00:52:11.098297 | orchestrator | memcached : Check memcached container ----------------------------------- 1.75s
2026-03-01 00:52:11.098300 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s
2026-03-01 00:52:11.098304 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s
2026-03-01 00:52:11.098308 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-03-01 00:52:11.098312 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-01 00:52:11.098316 | orchestrator |
2026-03-01 00:52:11.098322 | orchestrator |
2026-03-01 00:52:11.098348 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 00:52:11.098354 | orchestrator |
2026-03-01 00:52:11.098361 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 00:52:11.098366 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.278) 0:00:00.278 **********
2026-03-01 00:52:11.098373 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:52:11.098380 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:52:11.098384 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:52:11.098388 | orchestrator |
2026-03-01 00:52:11.098391 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 00:52:11.098404 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.271) 0:00:00.550 **********
2026-03-01 00:52:11.098408 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-01 00:52:11.098412 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-01 00:52:11.098415 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-01 00:52:11.098419 | orchestrator |
2026-03-01 00:52:11.098423 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-01 00:52:11.098427 | orchestrator |
2026-03-01 00:52:11.098430 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-01 00:52:11.098434 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.407) 0:00:00.957 **********
2026-03-01 00:52:11.098438 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:52:11.098442 | orchestrator |
2026-03-01 00:52:11.098446 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-01 00:52:11.098450 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.396) 0:00:01.354 **********
2026-03-01 00:52:11.098455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098506 | orchestrator |
2026-03-01 00:52:11.098513 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-01 00:52:11.098519 | orchestrator | Sunday 01 March 2026 00:51:49 +0000 (0:00:01.257) 0:00:02.611 **********
2026-03-01 00:52:11.098526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098561 | orchestrator |
2026-03-01 00:52:11.098565 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-01 00:52:11.098569 | orchestrator | Sunday 01 March 2026 00:51:52 +0000 (0:00:03.082) 0:00:05.694 **********
2026-03-01 00:52:11.098573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098600 | orchestrator |
2026-03-01 00:52:11.098607 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-01 00:52:11.098611 | orchestrator | Sunday 01 March 2026 00:51:55 +0000 (0:00:02.886) 0:00:08.580 **********
2026-03-01 00:52:11.098618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-01 00:52:11.098625 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-01 00:52:11.098632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-01 00:52:11.098636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-01 00:52:11.098640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-01 00:52:11.098644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-01 00:52:11.098648 | orchestrator | 2026-03-01 00:52:11.098651 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-01 00:52:11.098655 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:02.066) 0:00:10.646 ********** 2026-03-01 00:52:11.098659 | orchestrator | 2026-03-01 00:52:11.098663 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-01 00:52:11.098673 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:00.084) 0:00:10.731 ********** 2026-03-01 00:52:11.098677 | orchestrator | 2026-03-01 00:52:11.098681 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-03-01 00:52:11.098685 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:00.090) 0:00:10.821 ********** 2026-03-01 00:52:11.098833 | orchestrator | 2026-03-01 00:52:11.098846 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-01 00:52:11.098853 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:00.062) 0:00:10.884 ********** 2026-03-01 00:52:11.098860 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:52:11.098866 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:52:11.098871 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:52:11.098877 | orchestrator | 2026-03-01 00:52:11.098883 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-01 00:52:11.098889 | orchestrator | Sunday 01 March 2026 00:52:01 +0000 (0:00:03.760) 0:00:14.644 ********** 2026-03-01 00:52:11.098894 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:52:11.098900 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:52:11.098905 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:52:11.098910 | orchestrator | 2026-03-01 00:52:11.098916 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:52:11.098922 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:52:11.098928 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:52:11.098939 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:52:11.098946 | orchestrator | 2026-03-01 00:52:11.098952 | orchestrator | 2026-03-01 00:52:11.098958 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:52:11.098964 | orchestrator | Sunday 01 March 2026 
00:52:10 +0000 (0:00:09.041) 0:00:23.686 ********** 2026-03-01 00:52:11.098971 | orchestrator | =============================================================================== 2026-03-01 00:52:11.098977 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.04s 2026-03-01 00:52:11.098983 | orchestrator | redis : Restart redis container ----------------------------------------- 3.76s 2026-03-01 00:52:11.098990 | orchestrator | redis : Copying over default config.json files -------------------------- 3.08s 2026-03-01 00:52:11.098998 | orchestrator | redis : Copying over redis config files --------------------------------- 2.89s 2026-03-01 00:52:11.099004 | orchestrator | redis : Check redis containers ------------------------------------------ 2.07s 2026-03-01 00:52:11.099010 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.26s 2026-03-01 00:52:11.099016 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-03-01 00:52:11.099023 | orchestrator | redis : include_tasks --------------------------------------------------- 0.40s 2026-03-01 00:52:11.099029 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-03-01 00:52:11.099035 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-03-01 00:52:11.099043 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:52:11.099054 | orchestrator | 2026-03-01 00:52:11 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED 2026-03-01 00:52:11.099061 | orchestrator | 2026-03-01 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:52:14.123405 | orchestrator | 2026-03-01 00:52:14 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:52:14.123982 | orchestrator | 2026-03-01 00:52:14 
| INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:52:14.125208 | orchestrator | 2026-03-01 00:52:14 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:52:14.126254 | orchestrator | 2026-03-01 00:52:14 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:52:14.127047 | orchestrator | 2026-03-01 00:52:14 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state STARTED 2026-03-01 00:52:14.127077 | orchestrator | 2026-03-01 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:52:47.654108 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:52:47.655757 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:52:47.658109 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:52:47.659734 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:52:47.662286 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED 2026-03-01 00:52:47.663774 | orchestrator | 2026-03-01 00:52:47 | INFO  | Task 1994be7b-2655-4fb4-8af4-2b261423bd94 is in state SUCCESS 2026-03-01 00:52:47.663822 | orchestrator | 2026-03-01 00:52:47.665632 | orchestrator | 2026-03-01 00:52:47.665708 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 00:52:47.665719 | orchestrator | 2026-03-01 00:52:47.665725 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 00:52:47.665731 | orchestrator | Sunday 01 March 2026 00:51:46 +0000 (0:00:00.249) 0:00:00.249 ********** 2026-03-01 00:52:47.665737 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:52:47.665742 | orchestrator
| ok: [testbed-node-1] 2026-03-01 00:52:47.665748 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:52:47.665753 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:52:47.665758 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:52:47.665764 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:52:47.665769 | orchestrator | 2026-03-01 00:52:47.665775 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 00:52:47.665817 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.707) 0:00:00.956 ********** 2026-03-01 00:52:47.665826 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665833 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665838 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665844 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665850 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665856 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-01 00:52:47.665861 | orchestrator | 2026-03-01 00:52:47.665867 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-01 00:52:47.665873 | orchestrator | 2026-03-01 00:52:47.665878 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-01 00:52:47.665884 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.619) 0:00:01.576 ********** 2026-03-01 00:52:47.665902 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:52:47.665909 | orchestrator | 
2026-03-01 00:52:47.665922 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-01 00:52:47.665926 | orchestrator | Sunday 01 March 2026 00:51:49 +0000 (0:00:01.336) 0:00:02.912 ********** 2026-03-01 00:52:47.665930 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-01 00:52:47.665933 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-01 00:52:47.665936 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-01 00:52:47.665939 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-01 00:52:47.665942 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-01 00:52:47.665945 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-01 00:52:47.665948 | orchestrator | 2026-03-01 00:52:47.665951 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-01 00:52:47.665954 | orchestrator | Sunday 01 March 2026 00:51:51 +0000 (0:00:01.646) 0:00:04.558 ********** 2026-03-01 00:52:47.665957 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-01 00:52:47.665960 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-01 00:52:47.665963 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-01 00:52:47.665966 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-01 00:52:47.665969 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-01 00:52:47.665977 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-01 00:52:47.665980 | orchestrator | 2026-03-01 00:52:47.665983 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-01 00:52:47.665986 | orchestrator | Sunday 01 March 2026 00:51:52 +0000 (0:00:01.794) 0:00:06.353 ********** 2026-03-01 00:52:47.665989 | orchestrator | skipping: [testbed-node-0] => 
(item=openvswitch)  2026-03-01 00:52:47.665992 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:52:47.665996 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-01 00:52:47.665999 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:52:47.666002 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-01 00:52:47.666005 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:52:47.666008 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-01 00:52:47.666011 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-01 00:52:47.666048 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:52:47.666051 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:52:47.666054 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-01 00:52:47.666057 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:52:47.666062 | orchestrator | 2026-03-01 00:52:47.666067 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-01 00:52:47.666074 | orchestrator | Sunday 01 March 2026 00:51:54 +0000 (0:00:01.492) 0:00:07.845 ********** 2026-03-01 00:52:47.666082 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:52:47.666087 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:52:47.666092 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:52:47.666097 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:52:47.666102 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:52:47.666107 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:52:47.666112 | orchestrator | 2026-03-01 00:52:47.666118 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-01 00:52:47.666124 | orchestrator | Sunday 01 March 2026 00:51:55 +0000 (0:00:00.643) 0:00:08.489 ********** 2026-03-01 00:52:47.666144 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666207 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666228 | orchestrator | 2026-03-01 00:52:47.666232 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-01 00:52:47.666236 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:02.213) 0:00:10.702 ********** 2026-03-01 00:52:47.666240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666243 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666363 | orchestrator | 2026-03-01 00:52:47.666369 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-01 00:52:47.666374 | orchestrator | Sunday 01 March 2026 00:52:00 +0000 (0:00:03.547) 0:00:14.250 ********** 2026-03-01 00:52:47.666379 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:52:47.666385 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:52:47.666390 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:52:47.666395 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:52:47.666400 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:52:47.666405 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:52:47.666410 | orchestrator | 2026-03-01 00:52:47.666416 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-01 00:52:47.666421 | orchestrator | Sunday 01 March 2026 00:52:01 +0000 (0:00:00.869) 0:00:15.119 ********** 2026-03-01 00:52:47.666541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-01 00:52:47.666608 | orchestrator | 2026-03-01 00:52:47.666611 
| orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666615 | orchestrator | Sunday 01 March 2026 00:52:04 +0000 (0:00:02.840) 0:00:17.959 ********** 2026-03-01 00:52:47.666618 | orchestrator | 2026-03-01 00:52:47.666621 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666624 | orchestrator | Sunday 01 March 2026 00:52:04 +0000 (0:00:00.250) 0:00:18.210 ********** 2026-03-01 00:52:47.666627 | orchestrator | 2026-03-01 00:52:47.666630 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666633 | orchestrator | Sunday 01 March 2026 00:52:04 +0000 (0:00:00.120) 0:00:18.331 ********** 2026-03-01 00:52:47.666636 | orchestrator | 2026-03-01 00:52:47.666639 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666642 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.120) 0:00:18.451 ********** 2026-03-01 00:52:47.666645 | orchestrator | 2026-03-01 00:52:47.666648 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666651 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.124) 0:00:18.575 ********** 2026-03-01 00:52:47.666654 | orchestrator | 2026-03-01 00:52:47.666657 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-01 00:52:47.666660 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.232) 0:00:18.808 ********** 2026-03-01 00:52:47.666666 | orchestrator | 2026-03-01 00:52:47.666669 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-01 00:52:47.666672 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.150) 0:00:18.958 ********** 2026-03-01 00:52:47.666675 | orchestrator | changed: 
[testbed-node-0] 2026-03-01 00:52:47.666678 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:52:47.666681 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:52:47.666684 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:52:47.666688 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:52:47.666692 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:52:47.666695 | orchestrator | 2026-03-01 00:52:47.666698 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-01 00:52:47.666701 | orchestrator | Sunday 01 March 2026 00:52:15 +0000 (0:00:09.750) 0:00:28.709 ********** 2026-03-01 00:52:47.666704 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:52:47.666707 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:52:47.666710 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:52:47.666713 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:52:47.666716 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:52:47.666719 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:52:47.666722 | orchestrator | 2026-03-01 00:52:47.666725 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-01 00:52:47.666728 | orchestrator | Sunday 01 March 2026 00:52:16 +0000 (0:00:01.290) 0:00:29.999 ********** 2026-03-01 00:52:47.666731 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:52:47.666734 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:52:47.666737 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:52:47.666740 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:52:47.666743 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:52:47.666747 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:52:47.666750 | orchestrator | 2026-03-01 00:52:47.666753 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-01 00:52:47.666756 | orchestrator | Sunday 01 March 2026 00:52:21 +0000 
(0:00:04.970) 0:00:34.969 ********** 2026-03-01 00:52:47.666759 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-01 00:52:47.666763 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-01 00:52:47.666766 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-01 00:52:47.666769 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-01 00:52:47.666772 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-01 00:52:47.666777 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-01 00:52:47.666780 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-01 00:52:47.666783 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-01 00:52:47.666786 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-01 00:52:47.666789 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-01 00:52:47.666792 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-01 00:52:47.666795 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-01 00:52:47.666798 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2026-03-01 00:52:47.666803 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-01 00:52:47.666806 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-01 00:52:47.666810 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-01 00:52:47.666813 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-01 00:52:47.666815 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-01 00:52:47.666818 | orchestrator | 2026-03-01 00:52:47.666822 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-01 00:52:47.666825 | orchestrator | Sunday 01 March 2026 00:52:29 +0000 (0:00:08.284) 0:00:43.253 ********** 2026-03-01 00:52:47.666828 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-01 00:52:47.666831 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:52:47.666834 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-01 00:52:47.666837 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:52:47.666840 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-01 00:52:47.666844 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:52:47.666849 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-01 00:52:47.666854 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-01 00:52:47.666859 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-01 00:52:47.666864 | orchestrator | 2026-03-01 00:52:47.666870 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-01 
00:52:47.666874 | orchestrator | Sunday 01 March 2026 00:52:32 +0000 (0:00:02.873) 0:00:46.126 **********
2026-03-01 00:52:47.666879 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666885 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:52:47.666890 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666895 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:52:47.666903 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666908 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:52:47.666913 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666916 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666919 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-01 00:52:47.666922 | orchestrator |
2026-03-01 00:52:47.666925 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-01 00:52:47.666928 | orchestrator | Sunday 01 March 2026 00:52:36 +0000 (0:00:03.702) 0:00:49.829 **********
2026-03-01 00:52:47.666931 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:52:47.666934 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:52:47.666937 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:52:47.666940 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:52:47.666943 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:52:47.666946 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:52:47.666950 | orchestrator |
2026-03-01 00:52:47.666953 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 00:52:47.666956 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-01 00:52:47.666960 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-01 00:52:47.666963 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-01 00:52:47.666970 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-01 00:52:47.666973 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-01 00:52:47.666979 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-01 00:52:47.666982 | orchestrator |
2026-03-01 00:52:47.666985 | orchestrator |
2026-03-01 00:52:47.666988 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 00:52:47.666991 | orchestrator | Sunday 01 March 2026 00:52:44 +0000 (0:00:08.448) 0:00:58.277 **********
2026-03-01 00:52:47.666994 | orchestrator | ===============================================================================
2026-03-01 00:52:47.666998 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.42s
2026-03-01 00:52:47.667001 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.75s
2026-03-01 00:52:47.667004 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.28s
2026-03-01 00:52:47.667007 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.70s
2026-03-01 00:52:47.667011 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.55s
2026-03-01 00:52:47.667016 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.87s
2026-03-01 00:52:47.667021 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.84s
2026-03-01 00:52:47.667027 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.21s
2026-03-01 00:52:47.667033 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.79s
2026-03-01 00:52:47.667043 | orchestrator | module-load : Load modules ---------------------------------------------- 1.65s
2026-03-01 00:52:47.667047 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.49s
2026-03-01 00:52:47.667050 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.34s
2026-03-01 00:52:47.667053 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.29s
2026-03-01 00:52:47.667087 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.00s
2026-03-01 00:52:47.667094 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.87s
2026-03-01 00:52:47.667099 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2026-03-01 00:52:47.667104 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.64s
2026-03-01 00:52:47.667115 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2026-03-01 00:52:47.667121 | orchestrator | 2026-03-01 00:52:47 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:52:50.707266 | orchestrator | 2026-03-01 00:52:50 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:52:50.708780 | orchestrator | 2026-03-01 00:52:50 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:52:50.711184 | orchestrator | 2026-03-01 00:52:50 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:52:50.713329 | orchestrator | 2026-03-01 00:52:50 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:52:50.715359 | orchestrator |
2026-03-01 00:52:50 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state STARTED
2026-03-01 00:52:50.715442 | orchestrator | 2026-03-01 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:53:46.150837 | orchestrator | 2026-03-01 00:53:46 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED
2026-03-01 00:53:46.151315 | orchestrator | 2026-03-01 00:53:46 | INFO  | Task 
c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:53:46.152117 | orchestrator | 2026-03-01 00:53:46 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:53:46.153035 | orchestrator | 2026-03-01 00:53:46 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:53:46.156968 | orchestrator | 2026-03-01 00:53:46 | INFO  | Task 208a93d0-7e6c-4568-a3a4-ba4ef22f51bf is in state SUCCESS
2026-03-01 00:53:46.157948 | orchestrator |
2026-03-01 00:53:46.157973 | orchestrator |
2026-03-01 00:53:46.157981 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-01 00:53:46.157989 | orchestrator |
2026-03-01 00:53:46.157996 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-01 00:53:46.158002 | orchestrator | Sunday 01 March 2026 00:49:25 +0000 (0:00:00.193) 0:00:00.193 **********
2026-03-01 00:53:46.158009 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:53:46.158049 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:53:46.158057 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:53:46.158067 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.158079 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.158093 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.158105 | orchestrator |
2026-03-01 00:53:46.158119 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-01 00:53:46.158153 | orchestrator | Sunday 01 March 2026 00:49:26 +0000 (0:00:00.619) 0:00:00.812 **********
2026-03-01 00:53:46.158175 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158184 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158190 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158197 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158204 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158211 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158217 | orchestrator |
2026-03-01 00:53:46.158225 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-01 00:53:46.158232 | orchestrator | Sunday 01 March 2026 00:49:26 +0000 (0:00:00.622) 0:00:01.435 **********
2026-03-01 00:53:46.158249 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158255 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158262 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158268 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158273 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158276 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158280 | orchestrator |
2026-03-01 00:53:46.158284 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-01 00:53:46.158288 | orchestrator | Sunday 01 March 2026 00:49:27 +0000 (0:00:00.650) 0:00:02.085 **********
2026-03-01 00:53:46.158291 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:53:46.158295 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:53:46.158299 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:53:46.158306 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.158312 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.158318 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.158324 | orchestrator |
2026-03-01 00:53:46.158330 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-01 00:53:46.158337 | orchestrator | Sunday 01 March 2026 00:49:29 +0000 (0:00:01.980) 0:00:04.065 **********
2026-03-01 00:53:46.158343 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:53:46.158349 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:53:46.158356 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.158362 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.158368 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:53:46.158375 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.158381 | orchestrator |
2026-03-01 00:53:46.158385 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-01 00:53:46.158389 | orchestrator | Sunday 01 March 2026 00:49:30 +0000 (0:00:01.067) 0:00:05.133 **********
2026-03-01 00:53:46.158393 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:53:46.158397 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:53:46.158401 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:53:46.158405 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.158409 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.158412 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.158416 | orchestrator |
2026-03-01 00:53:46.158420 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-01 00:53:46.158424 | orchestrator | Sunday 01 March 2026 00:49:31 +0000 (0:00:01.046) 0:00:06.180 **********
2026-03-01 00:53:46.158427 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158431 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158435 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158438 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158442 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158447 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158453 | orchestrator |
2026-03-01 00:53:46.158459 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-01 00:53:46.158466 | orchestrator | Sunday 01 March 2026 00:49:32 +0000 (0:00:00.846) 0:00:07.026 **********
2026-03-01 00:53:46.158472 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158478 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158490 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158497 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158503 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158509 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158515 | orchestrator |
2026-03-01 00:53:46.158521 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-01 00:53:46.158528 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:00.603) 0:00:07.630 **********
2026-03-01 00:53:46.158534 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158540 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158547 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158553 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158559 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158565 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158571 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158586 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158593 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158598 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158612 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158619 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158626 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158632 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158639 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158645 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-01 00:53:46.158651 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-01 00:53:46.158657 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158663 | orchestrator |
2026-03-01 00:53:46.158669 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-01 00:53:46.158676 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:00.715) 0:00:08.345 **********
2026-03-01 00:53:46.158682 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158688 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158694 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158700 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158706 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158713 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158719 | orchestrator |
2026-03-01 00:53:46.158726 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-01 00:53:46.158746 | orchestrator | Sunday 01 March 2026 00:49:35 +0000 (0:00:01.525) 0:00:09.871 **********
2026-03-01 00:53:46.158752 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:53:46.158758 | orchestrator | ok: [testbed-node-4]
2026-03-01 00:53:46.158765 | orchestrator | ok: [testbed-node-5]
2026-03-01 00:53:46.158771 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.158778 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.158784 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.158790 | orchestrator |
2026-03-01 00:53:46.158796 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-01 00:53:46.158803 | orchestrator | Sunday 01 March 2026 00:49:36 +0000 (0:00:00.761) 0:00:10.633 **********
2026-03-01 00:53:46.158809 | orchestrator | changed: [testbed-node-3]
2026-03-01 00:53:46.158815 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.158821 | orchestrator | changed: [testbed-node-4]
2026-03-01 00:53:46.158828 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.158839 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.158845 | orchestrator | changed: [testbed-node-5]
2026-03-01 00:53:46.158851 | orchestrator |
2026-03-01 00:53:46.158858 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-01 00:53:46.158864 | orchestrator | Sunday 01 March 2026 00:49:41 +0000 (0:00:05.289) 0:00:15.923 **********
2026-03-01 00:53:46.158870 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158877 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158883 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158889 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158896 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158902 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158908 | orchestrator |
2026-03-01 00:53:46.158915 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-01 00:53:46.158921 | orchestrator | Sunday 01 March 2026 00:49:42 +0000 (0:00:01.416) 0:00:17.339 **********
2026-03-01 00:53:46.158927 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158934 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158940 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.158946 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.158953 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.158959 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.158965 | orchestrator |
2026-03-01 00:53:46.158972 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-01 00:53:46.158979 | orchestrator | Sunday 01 March 2026 00:49:44 +0000 (0:00:01.612) 0:00:18.952 **********
2026-03-01 00:53:46.158985 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.158992 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.158998 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.159004 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.159010 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.159016 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.159023 | orchestrator |
2026-03-01 00:53:46.159029 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-01 00:53:46.159036 | orchestrator | Sunday 01 March 2026 00:49:45 +0000 (0:00:01.166) 0:00:20.119 **********
2026-03-01 00:53:46.159042 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-01 00:53:46.159049 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-01 00:53:46.159055 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.159062 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-01 00:53:46.159068 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-01 00:53:46.159075 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.159081 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-01 00:53:46.159087 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-01 00:53:46.159093 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.159100 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-01 00:53:46.159106 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-01 00:53:46.159113 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.159119 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-01 00:53:46.159125 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-01 00:53:46.159132 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.159142 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-01 00:53:46.159148 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-01 00:53:46.159154 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.159160 | orchestrator |
2026-03-01 00:53:46.159167 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-01 00:53:46.159177 | orchestrator | Sunday 01 March 2026 00:49:47 +0000 (0:00:02.076) 0:00:22.195 **********
2026-03-01 00:53:46.159188 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.159195 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.159202 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.159208 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.159215 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.159221 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.159228 | orchestrator |
2026-03-01 00:53:46.159235 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-01 00:53:46.159256 | orchestrator | Sunday 01 March 2026 00:49:48 +0000 (0:00:01.288) 0:00:23.484 **********
2026-03-01 00:53:46.159262 | orchestrator | skipping: [testbed-node-3]
2026-03-01 00:53:46.159268 | orchestrator | skipping: [testbed-node-4]
2026-03-01 00:53:46.159275 | orchestrator | skipping: [testbed-node-5]
2026-03-01 00:53:46.159281 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.159287 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.159293 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.159300 | orchestrator |
2026-03-01 00:53:46.159307 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-01 00:53:46.159313 | orchestrator |
2026-03-01 00:53:46.159318 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-01 00:53:46.159324 | orchestrator | Sunday 01 March 2026 00:49:50 +0000 (0:00:01.299) 0:00:24.783 **********
2026-03-01 00:53:46.159330 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.159336 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.159343 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.159349 | orchestrator |
2026-03-01 00:53:46.159356 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-01 00:53:46.159362 | orchestrator | Sunday 01 March 2026 00:49:52 +0000 (0:00:01.911) 0:00:26.695 **********
2026-03-01 00:53:46.159368 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.159374 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.159380 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.159386 | orchestrator |
2026-03-01 00:53:46.159427 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-01 00:53:46.159434 | orchestrator | Sunday 01 March 2026 00:49:53 +0000 (0:00:01.437) 0:00:28.132 **********
2026-03-01 00:53:46.159441 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.159446 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.159450 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.159454 | orchestrator |
2026-03-01 00:53:46.159458 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-01 00:53:46.159461 | orchestrator | Sunday 01 March 2026 00:49:54 +0000 (0:00:00.969) 0:00:29.102 **********
2026-03-01 00:53:46.159465 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:53:46.159469 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:53:46.159473 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:53:46.159476 | orchestrator |
2026-03-01 00:53:46.159480 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-01 00:53:46.159484 | orchestrator | Sunday 01 March 2026 00:49:55 +0000 (0:00:00.822) 0:00:29.926 **********
2026-03-01 00:53:46.159488 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:53:46.159491 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:53:46.159495 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:53:46.159499 | orchestrator |
2026-03-01 00:53:46.159503 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-01 00:53:46.159507 | orchestrator | Sunday 01 March 2026 00:49:56 +0000 (0:00:00.621) 0:00:30.547 **********
2026-03-01 00:53:46.159511 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.159514 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.159518 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.159522 | orchestrator |
2026-03-01 00:53:46.159526 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-01 00:53:46.159529 | orchestrator | Sunday 01 March 2026 00:49:57 +0000 (0:00:01.259) 0:00:31.806 **********
2026-03-01 00:53:46.159537 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:53:46.159541 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:53:46.159545 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:53:46.159548 | orchestrator |
2026-03-01 00:53:46.159552 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-01 00:53:46.159556 | orchestrator | Sunday 01 March 2026 00:49:59 +0000 (0:00:02.010) 0:00:33.817 **********
2026-03-01 00:53:46.159560
| orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:53:46.159564 | orchestrator | 2026-03-01 00:53:46.159567 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-01 00:53:46.159571 | orchestrator | Sunday 01 March 2026 00:49:59 +0000 (0:00:00.584) 0:00:34.402 ********** 2026-03-01 00:53:46.159575 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.159579 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.159585 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.159591 | orchestrator | 2026-03-01 00:53:46.159597 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-01 00:53:46.159604 | orchestrator | Sunday 01 March 2026 00:50:03 +0000 (0:00:04.007) 0:00:38.409 ********** 2026-03-01 00:53:46.159610 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.159617 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.159623 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.159629 | orchestrator | 2026-03-01 00:53:46.159635 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-01 00:53:46.159642 | orchestrator | Sunday 01 March 2026 00:50:05 +0000 (0:00:01.181) 0:00:39.590 ********** 2026-03-01 00:53:46.159648 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.159655 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.159661 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.159667 | orchestrator | 2026-03-01 00:53:46.159674 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-01 00:53:46.159689 | orchestrator | Sunday 01 March 2026 00:50:06 +0000 (0:00:01.072) 0:00:40.663 ********** 2026-03-01 00:53:46.159696 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.159702 | orchestrator | 
skipping: [testbed-node-2] 2026-03-01 00:53:46.159708 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.159715 | orchestrator | 2026-03-01 00:53:46.159721 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-01 00:53:46.159732 | orchestrator | Sunday 01 March 2026 00:50:07 +0000 (0:00:01.678) 0:00:42.341 ********** 2026-03-01 00:53:46.159738 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.159745 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.159751 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.159758 | orchestrator | 2026-03-01 00:53:46.159764 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-01 00:53:46.159771 | orchestrator | Sunday 01 March 2026 00:50:08 +0000 (0:00:00.789) 0:00:43.130 ********** 2026-03-01 00:53:46.159777 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.159783 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.159790 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.159796 | orchestrator | 2026-03-01 00:53:46.159803 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-01 00:53:46.159809 | orchestrator | Sunday 01 March 2026 00:50:08 +0000 (0:00:00.391) 0:00:43.522 ********** 2026-03-01 00:53:46.159816 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.159822 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.159828 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.159834 | orchestrator | 2026-03-01 00:53:46.159841 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-01 00:53:46.159847 | orchestrator | Sunday 01 March 2026 00:50:11 +0000 (0:00:02.243) 0:00:45.766 ********** 2026-03-01 00:53:46.159853 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.159860 | orchestrator | ok: 
[testbed-node-2] 2026-03-01 00:53:46.159870 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.159877 | orchestrator | 2026-03-01 00:53:46.159883 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-01 00:53:46.159889 | orchestrator | Sunday 01 March 2026 00:50:13 +0000 (0:00:02.600) 0:00:48.367 ********** 2026-03-01 00:53:46.159896 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.159902 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.159909 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.159915 | orchestrator | 2026-03-01 00:53:46.159921 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-01 00:53:46.159928 | orchestrator | Sunday 01 March 2026 00:50:14 +0000 (0:00:00.685) 0:00:49.052 ********** 2026-03-01 00:53:46.159935 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-01 00:53:46.159942 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-01 00:53:46.159948 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-01 00:53:46.159955 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-01 00:53:46.159961 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-01 00:53:46.159967 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-01 00:53:46.159974 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-01 00:53:46.159980 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-01 00:53:46.159987 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-01 00:53:46.159994 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-01 00:53:46.160000 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-01 00:53:46.160006 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-03-01 00:53:46.160012 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.160019 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160026 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160033 | orchestrator | 2026-03-01 00:53:46.160039 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-01 00:53:46.160046 | orchestrator | Sunday 01 March 2026 00:50:58 +0000 (0:00:43.695) 0:01:32.748 ********** 2026-03-01 00:53:46.160053 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.160059 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.160065 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.160072 | orchestrator | 2026-03-01 00:53:46.160077 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-01 00:53:46.160084 | orchestrator | Sunday 01 March 2026 00:50:58 +0000 (0:00:00.417) 0:01:33.166 ********** 2026-03-01 00:53:46.160090 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160096 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160102 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160108 | orchestrator | 2026-03-01 00:53:46.160118 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-01 00:53:46.160129 | orchestrator | Sunday 01 March 2026 00:51:00 +0000 (0:00:01.668) 0:01:34.834 ********** 2026-03-01 00:53:46.160136 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160142 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160146 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160150 | orchestrator | 2026-03-01 00:53:46.160159 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-01 00:53:46.160166 | orchestrator | Sunday 01 March 2026 00:51:01 +0000 (0:00:01.586) 0:01:36.420 ********** 2026-03-01 00:53:46.160172 
| orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160187 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160193 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160200 | orchestrator | 2026-03-01 00:53:46.160207 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-01 00:53:46.160213 | orchestrator | Sunday 01 March 2026 00:51:27 +0000 (0:00:25.917) 0:02:02.338 ********** 2026-03-01 00:53:46.160220 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.160227 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160233 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160249 | orchestrator | 2026-03-01 00:53:46.160256 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-01 00:53:46.160263 | orchestrator | Sunday 01 March 2026 00:51:28 +0000 (0:00:00.667) 0:02:03.006 ********** 2026-03-01 00:53:46.160269 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.160276 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160282 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160288 | orchestrator | 2026-03-01 00:53:46.160295 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-01 00:53:46.160301 | orchestrator | Sunday 01 March 2026 00:51:29 +0000 (0:00:00.682) 0:02:03.689 ********** 2026-03-01 00:53:46.160307 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160314 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160320 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160327 | orchestrator | 2026-03-01 00:53:46.160333 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-01 00:53:46.160339 | orchestrator | Sunday 01 March 2026 00:51:29 +0000 (0:00:00.730) 0:02:04.420 ********** 2026-03-01 00:53:46.160346 | orchestrator | ok: [testbed-node-0] 
2026-03-01 00:53:46.160352 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160359 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160365 | orchestrator | 2026-03-01 00:53:46.160371 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-01 00:53:46.160378 | orchestrator | Sunday 01 March 2026 00:51:30 +0000 (0:00:00.844) 0:02:05.264 ********** 2026-03-01 00:53:46.160384 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.160391 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160397 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160403 | orchestrator | 2026-03-01 00:53:46.160410 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-01 00:53:46.160416 | orchestrator | Sunday 01 March 2026 00:51:31 +0000 (0:00:00.303) 0:02:05.567 ********** 2026-03-01 00:53:46.160423 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160429 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160435 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160442 | orchestrator | 2026-03-01 00:53:46.160448 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-01 00:53:46.160455 | orchestrator | Sunday 01 March 2026 00:51:31 +0000 (0:00:00.680) 0:02:06.248 ********** 2026-03-01 00:53:46.160461 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160467 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160474 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160480 | orchestrator | 2026-03-01 00:53:46.160487 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-01 00:53:46.160493 | orchestrator | Sunday 01 March 2026 00:51:32 +0000 (0:00:00.700) 0:02:06.949 ********** 2026-03-01 00:53:46.160504 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160511 | 
orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160517 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160523 | orchestrator | 2026-03-01 00:53:46.160530 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-01 00:53:46.160536 | orchestrator | Sunday 01 March 2026 00:51:33 +0000 (0:00:01.082) 0:02:08.032 ********** 2026-03-01 00:53:46.160543 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:53:46.160549 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:53:46.160556 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:53:46.160562 | orchestrator | 2026-03-01 00:53:46.160569 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-01 00:53:46.160575 | orchestrator | Sunday 01 March 2026 00:51:34 +0000 (0:00:00.795) 0:02:08.827 ********** 2026-03-01 00:53:46.160582 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.160588 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.160595 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.160602 | orchestrator | 2026-03-01 00:53:46.160608 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-01 00:53:46.160615 | orchestrator | Sunday 01 March 2026 00:51:34 +0000 (0:00:00.260) 0:02:09.087 ********** 2026-03-01 00:53:46.160622 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.160628 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.160635 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.160642 | orchestrator | 2026-03-01 00:53:46.160648 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-01 00:53:46.160654 | orchestrator | Sunday 01 March 2026 00:51:34 +0000 (0:00:00.233) 0:02:09.321 ********** 2026-03-01 00:53:46.160661 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160667 | orchestrator | 
ok: [testbed-node-0] 2026-03-01 00:53:46.160674 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160680 | orchestrator | 2026-03-01 00:53:46.160687 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-01 00:53:46.160693 | orchestrator | Sunday 01 March 2026 00:51:35 +0000 (0:00:00.732) 0:02:10.054 ********** 2026-03-01 00:53:46.160699 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.160703 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.160707 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.160711 | orchestrator | 2026-03-01 00:53:46.160715 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-01 00:53:46.160719 | orchestrator | Sunday 01 March 2026 00:51:36 +0000 (0:00:00.587) 0:02:10.641 ********** 2026-03-01 00:53:46.160723 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-01 00:53:46.160733 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-01 00:53:46.160741 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-01 00:53:46.160747 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-01 00:53:46.160754 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-01 00:53:46.160760 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-01 00:53:46.160766 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-01 00:53:46.160773 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-01 
00:53:46.160779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-01 00:53:46.160786 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-01 00:53:46.160793 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-01 00:53:46.160801 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-01 00:53:46.160804 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-01 00:53:46.160809 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-01 00:53:46.160815 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-01 00:53:46.160821 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-01 00:53:46.160827 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-01 00:53:46.160834 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-01 00:53:46.160840 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-01 00:53:46.160846 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-01 00:53:46.160852 | orchestrator | 2026-03-01 00:53:46.160859 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-01 00:53:46.160865 | orchestrator | 2026-03-01 00:53:46.160871 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-01 00:53:46.160878 | orchestrator | Sunday 01 March 2026 00:51:38 +0000 (0:00:02.855) 
0:02:13.497 ********** 2026-03-01 00:53:46.160884 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:53:46.160890 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:53:46.160897 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:53:46.160903 | orchestrator | 2026-03-01 00:53:46.160910 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-01 00:53:46.160916 | orchestrator | Sunday 01 March 2026 00:51:39 +0000 (0:00:00.499) 0:02:13.997 ********** 2026-03-01 00:53:46.160923 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:53:46.160929 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:53:46.160936 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:53:46.160942 | orchestrator | 2026-03-01 00:53:46.160949 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-01 00:53:46.160955 | orchestrator | Sunday 01 March 2026 00:51:40 +0000 (0:00:00.629) 0:02:14.627 ********** 2026-03-01 00:53:46.160961 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:53:46.160968 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:53:46.160974 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:53:46.160981 | orchestrator | 2026-03-01 00:53:46.160987 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-01 00:53:46.160994 | orchestrator | Sunday 01 March 2026 00:51:40 +0000 (0:00:00.335) 0:02:14.963 ********** 2026-03-01 00:53:46.161000 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 00:53:46.161007 | orchestrator | 2026-03-01 00:53:46.161013 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-01 00:53:46.161019 | orchestrator | Sunday 01 March 2026 00:51:41 +0000 (0:00:00.690) 0:02:15.653 ********** 2026-03-01 00:53:46.161026 | orchestrator | skipping: [testbed-node-3] 2026-03-01 
00:53:46.161032 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:53:46.161039 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:53:46.161045 | orchestrator | 2026-03-01 00:53:46.161051 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-01 00:53:46.161058 | orchestrator | Sunday 01 March 2026 00:51:41 +0000 (0:00:00.293) 0:02:15.947 ********** 2026-03-01 00:53:46.161064 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:53:46.161071 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:53:46.161077 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:53:46.161084 | orchestrator | 2026-03-01 00:53:46.161090 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-01 00:53:46.161097 | orchestrator | Sunday 01 March 2026 00:51:41 +0000 (0:00:00.376) 0:02:16.323 ********** 2026-03-01 00:53:46.161107 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:53:46.161114 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:53:46.161120 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:53:46.161127 | orchestrator | 2026-03-01 00:53:46.161557 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-01 00:53:46.161576 | orchestrator | Sunday 01 March 2026 00:51:42 +0000 (0:00:00.294) 0:02:16.618 ********** 2026-03-01 00:53:46.161580 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:53:46.161583 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:53:46.161587 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:53:46.161591 | orchestrator | 2026-03-01 00:53:46.161604 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-01 00:53:46.161608 | orchestrator | Sunday 01 March 2026 00:51:43 +0000 (0:00:00.942) 0:02:17.560 ********** 2026-03-01 00:53:46.161612 | orchestrator | changed: [testbed-node-3] 2026-03-01 
00:53:46.161616 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:53:46.161619 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:53:46.161623 | orchestrator | 2026-03-01 00:53:46.161627 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-01 00:53:46.161631 | orchestrator | Sunday 01 March 2026 00:51:44 +0000 (0:00:01.245) 0:02:18.806 ********** 2026-03-01 00:53:46.161634 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:53:46.161638 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:53:46.161642 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:53:46.161646 | orchestrator | 2026-03-01 00:53:46.161649 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-01 00:53:46.161653 | orchestrator | Sunday 01 March 2026 00:51:45 +0000 (0:00:01.510) 0:02:20.316 ********** 2026-03-01 00:53:46.161657 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:53:46.161660 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:53:46.161664 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:53:46.161668 | orchestrator | 2026-03-01 00:53:46.161671 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-01 00:53:46.161675 | orchestrator | 2026-03-01 00:53:46.161679 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-01 00:53:46.161683 | orchestrator | Sunday 01 March 2026 00:51:55 +0000 (0:00:10.083) 0:02:30.400 ********** 2026-03-01 00:53:46.161686 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.161690 | orchestrator | 2026-03-01 00:53:46.161694 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-01 00:53:46.161698 | orchestrator | Sunday 01 March 2026 00:51:56 +0000 (0:00:00.726) 0:02:31.126 ********** 2026-03-01 00:53:46.161701 | orchestrator | changed: [testbed-manager] 
2026-03-01 00:53:46.161705 | orchestrator | 2026-03-01 00:53:46.161709 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-01 00:53:46.161712 | orchestrator | Sunday 01 March 2026 00:51:56 +0000 (0:00:00.383) 0:02:31.509 ********** 2026-03-01 00:53:46.161716 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-01 00:53:46.161720 | orchestrator | 2026-03-01 00:53:46.161724 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-01 00:53:46.161727 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:00.521) 0:02:32.031 ********** 2026-03-01 00:53:46.161731 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.161735 | orchestrator | 2026-03-01 00:53:46.161738 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-01 00:53:46.161742 | orchestrator | Sunday 01 March 2026 00:51:58 +0000 (0:00:00.856) 0:02:32.887 ********** 2026-03-01 00:53:46.161746 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.161749 | orchestrator | 2026-03-01 00:53:46.161753 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-01 00:53:46.161757 | orchestrator | Sunday 01 March 2026 00:51:58 +0000 (0:00:00.543) 0:02:33.430 ********** 2026-03-01 00:53:46.161763 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-01 00:53:46.161776 | orchestrator | 2026-03-01 00:53:46.161782 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-01 00:53:46.161789 | orchestrator | Sunday 01 March 2026 00:52:00 +0000 (0:00:01.677) 0:02:35.108 ********** 2026-03-01 00:53:46.161795 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-01 00:53:46.161801 | orchestrator | 2026-03-01 00:53:46.161807 | orchestrator | TASK [Set KUBECONFIG environment variable] 
************************************* 2026-03-01 00:53:46.161813 | orchestrator | Sunday 01 March 2026 00:52:01 +0000 (0:00:00.842) 0:02:35.951 ********** 2026-03-01 00:53:46.161819 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.161826 | orchestrator | 2026-03-01 00:53:46.161832 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-01 00:53:46.161839 | orchestrator | Sunday 01 March 2026 00:52:01 +0000 (0:00:00.484) 0:02:36.435 ********** 2026-03-01 00:53:46.161845 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.161851 | orchestrator | 2026-03-01 00:53:46.161857 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-01 00:53:46.161863 | orchestrator | 2026-03-01 00:53:46.161870 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-01 00:53:46.161876 | orchestrator | Sunday 01 March 2026 00:52:02 +0000 (0:00:00.347) 0:02:36.783 ********** 2026-03-01 00:53:46.161882 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.161888 | orchestrator | 2026-03-01 00:53:46.161895 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-01 00:53:46.161901 | orchestrator | Sunday 01 March 2026 00:52:02 +0000 (0:00:00.112) 0:02:36.896 ********** 2026-03-01 00:53:46.161907 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-01 00:53:46.161914 | orchestrator | 2026-03-01 00:53:46.161920 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-01 00:53:46.161926 | orchestrator | Sunday 01 March 2026 00:52:02 +0000 (0:00:00.189) 0:02:37.085 ********** 2026-03-01 00:53:46.161933 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.161939 | orchestrator | 2026-03-01 00:53:46.161945 | orchestrator | TASK [kubectl : Install 
apt-transport-https package] *************************** 2026-03-01 00:53:46.161952 | orchestrator | Sunday 01 March 2026 00:52:03 +0000 (0:00:00.713) 0:02:37.799 ********** 2026-03-01 00:53:46.161958 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.161965 | orchestrator | 2026-03-01 00:53:46.161971 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-01 00:53:46.161978 | orchestrator | Sunday 01 March 2026 00:52:04 +0000 (0:00:01.358) 0:02:39.158 ********** 2026-03-01 00:53:46.161984 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.161990 | orchestrator | 2026-03-01 00:53:46.161997 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-01 00:53:46.162003 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.864) 0:02:40.023 ********** 2026-03-01 00:53:46.162008 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.162057 | orchestrator | 2026-03-01 00:53:46.162073 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-01 00:53:46.162081 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:00.504) 0:02:40.527 ********** 2026-03-01 00:53:46.162088 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.162094 | orchestrator | 2026-03-01 00:53:46.162102 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-01 00:53:46.162109 | orchestrator | Sunday 01 March 2026 00:52:12 +0000 (0:00:06.858) 0:02:47.385 ********** 2026-03-01 00:53:46.162116 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.162123 | orchestrator | 2026-03-01 00:53:46.162130 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-01 00:53:46.162138 | orchestrator | Sunday 01 March 2026 00:52:25 +0000 (0:00:12.216) 0:02:59.602 ********** 2026-03-01 00:53:46.162145 | orchestrator | ok: 
[testbed-manager] 2026-03-01 00:53:46.162153 | orchestrator | 2026-03-01 00:53:46.162160 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-01 00:53:46.162172 | orchestrator | 2026-03-01 00:53:46.162179 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-01 00:53:46.162187 | orchestrator | Sunday 01 March 2026 00:52:25 +0000 (0:00:00.484) 0:03:00.086 ********** 2026-03-01 00:53:46.162194 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.162201 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.162208 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.162214 | orchestrator | 2026-03-01 00:53:46.162221 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-01 00:53:46.162228 | orchestrator | Sunday 01 March 2026 00:52:25 +0000 (0:00:00.302) 0:03:00.389 ********** 2026-03-01 00:53:46.162232 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162236 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.162256 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.162263 | orchestrator | 2026-03-01 00:53:46.162267 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-01 00:53:46.162271 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:00.305) 0:03:00.695 ********** 2026-03-01 00:53:46.162274 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-01 00:53:46.162278 | orchestrator | 2026-03-01 00:53:46.162282 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-01 00:53:46.162286 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:00.806) 0:03:01.502 ********** 2026-03-01 00:53:46.162290 | orchestrator | changed: [testbed-node-0 -> localhost] 
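The kubectl role logged above follows the standard Debian-family pattern: install `apt-transport-https`, fetch the repository signing key, fix its permissions, register the apt source, then install the packages. A hedged sketch of the equivalent manual commands (the pkgs.k8s.io URL, the v1.30 channel, and the keyring path are illustrative assumptions; the role's actual repository details are not shown in this log):

```shell
# Assumed upstream repository and channel -- not taken from this log.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 0644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg   # "Set permissions of gpg key"
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list            # "Add repository Debian"
sudo apt-get update && sudo apt-get install -y apt-transport-https kubectl
```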
2026-03-01 00:53:46.162293 | orchestrator | 2026-03-01 00:53:46.162297 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-01 00:53:46.162301 | orchestrator | Sunday 01 March 2026 00:52:27 +0000 (0:00:00.911) 0:03:02.413 ********** 2026-03-01 00:53:46.162307 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162313 | orchestrator | 2026-03-01 00:53:46.162319 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-01 00:53:46.162326 | orchestrator | Sunday 01 March 2026 00:52:28 +0000 (0:00:00.887) 0:03:03.301 ********** 2026-03-01 00:53:46.162332 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162339 | orchestrator | 2026-03-01 00:53:46.162345 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-01 00:53:46.162352 | orchestrator | Sunday 01 March 2026 00:52:28 +0000 (0:00:00.122) 0:03:03.423 ********** 2026-03-01 00:53:46.162359 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162363 | orchestrator | 2026-03-01 00:53:46.162367 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-01 00:53:46.162371 | orchestrator | Sunday 01 March 2026 00:52:29 +0000 (0:00:01.112) 0:03:04.535 ********** 2026-03-01 00:53:46.162375 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162378 | orchestrator | 2026-03-01 00:53:46.162382 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-01 00:53:46.162386 | orchestrator | Sunday 01 March 2026 00:52:30 +0000 (0:00:00.279) 0:03:04.815 ********** 2026-03-01 00:53:46.162390 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162393 | orchestrator | 2026-03-01 00:53:46.162397 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-01 00:53:46.162401 | 
orchestrator | Sunday 01 March 2026 00:52:30 +0000 (0:00:00.124) 0:03:04.940 ********** 2026-03-01 00:53:46.162405 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162408 | orchestrator | 2026-03-01 00:53:46.162412 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-01 00:53:46.162416 | orchestrator | Sunday 01 March 2026 00:52:30 +0000 (0:00:00.110) 0:03:05.050 ********** 2026-03-01 00:53:46.162420 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162423 | orchestrator | 2026-03-01 00:53:46.162427 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-01 00:53:46.162434 | orchestrator | Sunday 01 March 2026 00:52:30 +0000 (0:00:00.129) 0:03:05.179 ********** 2026-03-01 00:53:46.162438 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162442 | orchestrator | 2026-03-01 00:53:46.162445 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-01 00:53:46.162449 | orchestrator | Sunday 01 March 2026 00:52:35 +0000 (0:00:05.232) 0:03:10.412 ********** 2026-03-01 00:53:46.162453 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-01 00:53:46.162457 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
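The "Wait for Cilium resources" task above polls each workload until it reports ready, retrying up to 30 times (one retry is visible in the log). A hedged sketch of how such a wait is commonly expressed with kubectl (the `kube-system` namespace and the timeout value are assumptions):

```shell
# Block until each Cilium workload's rollout completes; kubectl itself polls
# until the timeout, mirroring the role's retry loop.
for res in deployment/cilium-operator daemonset/cilium \
           deployment/hubble-relay deployment/hubble-ui; do
  kubectl -n kube-system rollout status "$res" --timeout=300s
done
```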
2026-03-01 00:53:46.162461 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-01 00:53:46.162465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-01 00:53:46.162469 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-01 00:53:46.162473 | orchestrator | 2026-03-01 00:53:46.162476 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-01 00:53:46.162480 | orchestrator | Sunday 01 March 2026 00:53:18 +0000 (0:00:42.149) 0:03:52.561 ********** 2026-03-01 00:53:46.162487 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162491 | orchestrator | 2026-03-01 00:53:46.162498 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-01 00:53:46.162501 | orchestrator | Sunday 01 March 2026 00:53:19 +0000 (0:00:01.090) 0:03:53.652 ********** 2026-03-01 00:53:46.162505 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162509 | orchestrator | 2026-03-01 00:53:46.162513 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-01 00:53:46.162516 | orchestrator | Sunday 01 March 2026 00:53:20 +0000 (0:00:01.657) 0:03:55.309 ********** 2026-03-01 00:53:46.162520 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-01 00:53:46.162524 | orchestrator | 2026-03-01 00:53:46.162528 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-01 00:53:46.162532 | orchestrator | Sunday 01 March 2026 00:53:21 +0000 (0:00:01.091) 0:03:56.401 ********** 2026-03-01 00:53:46.162535 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162539 | orchestrator | 2026-03-01 00:53:46.162543 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-01 00:53:46.162547 | orchestrator 
| Sunday 01 March 2026 00:53:21 +0000 (0:00:00.109) 0:03:56.510 ********** 2026-03-01 00:53:46.162551 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-01 00:53:46.162555 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-01 00:53:46.162559 | orchestrator | 2026-03-01 00:53:46.162563 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-01 00:53:46.162567 | orchestrator | Sunday 01 March 2026 00:53:23 +0000 (0:00:01.826) 0:03:58.337 ********** 2026-03-01 00:53:46.162570 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162574 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.162578 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.162582 | orchestrator | 2026-03-01 00:53:46.162586 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-01 00:53:46.162589 | orchestrator | Sunday 01 March 2026 00:53:24 +0000 (0:00:00.414) 0:03:58.751 ********** 2026-03-01 00:53:46.162593 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.162597 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.162601 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.162607 | orchestrator | 2026-03-01 00:53:46.162613 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-01 00:53:46.162619 | orchestrator | 2026-03-01 00:53:46.162626 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-01 00:53:46.162632 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:01.382) 0:04:00.134 ********** 2026-03-01 00:53:46.162639 | orchestrator | ok: [testbed-manager] 2026-03-01 00:53:46.162649 | orchestrator | 2026-03-01 00:53:46.162653 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-01 00:53:46.162656 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.119) 0:04:00.254 ********** 2026-03-01 00:53:46.162663 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-01 00:53:46.162669 | orchestrator | 2026-03-01 00:53:46.162675 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-01 00:53:46.162682 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.204) 0:04:00.458 ********** 2026-03-01 00:53:46.162688 | orchestrator | changed: [testbed-manager] 2026-03-01 00:53:46.162694 | orchestrator | 2026-03-01 00:53:46.162701 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-01 00:53:46.162707 | orchestrator | 2026-03-01 00:53:46.162714 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-01 00:53:46.162720 | orchestrator | Sunday 01 March 2026 00:53:30 +0000 (0:00:04.765) 0:04:05.224 ********** 2026-03-01 00:53:46.162727 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:53:46.162732 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:53:46.162738 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:53:46.162743 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:53:46.162748 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:53:46.162754 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:53:46.162760 | orchestrator | 2026-03-01 00:53:46.162766 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-01 00:53:46.162772 | orchestrator | Sunday 01 March 2026 00:53:31 +0000 (0:00:00.716) 0:04:05.941 ********** 2026-03-01 00:53:46.162777 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-01 00:53:46.162793 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-03-01 00:53:46.162799 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-01 00:53:46.162805 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-01 00:53:46.162811 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-01 00:53:46.162817 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-01 00:53:46.162824 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-01 00:53:46.162830 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-01 00:53:46.162837 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-01 00:53:46.162843 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-01 00:53:46.162850 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-01 00:53:46.162857 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-01 00:53:46.162869 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-01 00:53:46.162883 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-01 00:53:46.162889 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-01 00:53:46.162896 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-01 00:53:46.162902 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-01 00:53:46.162908 | orchestrator | ok: [testbed-node-2 -> 
localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-01 00:53:46.162912 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-01 00:53:46.162916 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-01 00:53:46.162923 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-01 00:53:46.162927 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-01 00:53:46.162931 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-01 00:53:46.162935 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-01 00:53:46.162938 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-01 00:53:46.162942 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-01 00:53:46.162946 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-01 00:53:46.162950 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-01 00:53:46.162954 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-01 00:53:46.162957 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-01 00:53:46.162961 | orchestrator | 2026-03-01 00:53:46.162965 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-01 00:53:46.162969 | orchestrator | Sunday 01 March 2026 00:53:44 +0000 (0:00:12.628) 0:04:18.569 ********** 2026-03-01 00:53:46.162973 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:53:46.162976 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:53:46.162980 | 
orchestrator | skipping: [testbed-node-5] 2026-03-01 00:53:46.162984 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.162988 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.162992 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.162996 | orchestrator | 2026-03-01 00:53:46.163000 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-01 00:53:46.163003 | orchestrator | Sunday 01 March 2026 00:53:44 +0000 (0:00:00.589) 0:04:19.159 ********** 2026-03-01 00:53:46.163007 | orchestrator | skipping: [testbed-node-3] 2026-03-01 00:53:46.163011 | orchestrator | skipping: [testbed-node-4] 2026-03-01 00:53:46.163015 | orchestrator | skipping: [testbed-node-5] 2026-03-01 00:53:46.163018 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:53:46.163022 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:53:46.163026 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:53:46.163029 | orchestrator | 2026-03-01 00:53:46.163033 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:53:46.163037 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:53:46.163042 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-01 00:53:46.163046 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-01 00:53:46.163050 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-01 00:53:46.163054 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-01 00:53:46.163057 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-01 00:53:46.163061 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-01 00:53:46.163065 | orchestrator | 2026-03-01 00:53:46.163069 | orchestrator | 2026-03-01 00:53:46.163075 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:53:46.163079 | orchestrator | Sunday 01 March 2026 00:53:45 +0000 (0:00:00.390) 0:04:19.550 ********** 2026-03-01 00:53:46.163083 | orchestrator | =============================================================================== 2026-03-01 00:53:46.163086 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.70s 2026-03-01 00:53:46.163092 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.15s 2026-03-01 00:53:46.163098 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.92s 2026-03-01 00:53:46.163111 | orchestrator | Manage labels ---------------------------------------------------------- 12.63s 2026-03-01 00:53:46.163122 | orchestrator | kubectl : Install required packages ------------------------------------ 12.22s 2026-03-01 00:53:46.163128 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.08s 2026-03-01 00:53:46.163134 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.86s 2026-03-01 00:53:46.163140 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.29s 2026-03-01 00:53:46.163146 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.23s 2026-03-01 00:53:46.163153 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.77s 2026-03-01 00:53:46.163160 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.01s 2026-03-01 00:53:46.163164 | orchestrator | k3s_server : Remove 
manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.86s 2026-03-01 00:53:46.163168 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.60s 2026-03-01 00:53:46.163171 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.24s 2026-03-01 00:53:46.163175 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.08s 2026-03-01 00:53:46.163179 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.01s 2026-03-01 00:53:46.163183 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.98s 2026-03-01 00:53:46.163186 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.91s 2026-03-01 00:53:46.163190 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.83s 2026-03-01 00:53:46.163194 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.68s 2026-03-01 00:53:46.163198 | orchestrator | 2026-03-01 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:53:49.188747 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:53:49.191726 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:53:49.192540 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:53:49.193222 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task 562eb2fe-e963-46af-b4e9-9f4e13e7795f is in state STARTED 2026-03-01 00:53:49.196494 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:53:49.197226 | orchestrator | 2026-03-01 00:53:49 | INFO  | Task 
0a7fe716-72be-45ad-8ed4-4cd36d26dee6 is in state STARTED 2026-03-01 00:53:49.197358 | orchestrator | 2026-03-01 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:53:52.240331 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:53:52.240399 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:53:52.240406 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:53:52.241443 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task 562eb2fe-e963-46af-b4e9-9f4e13e7795f is in state STARTED 2026-03-01 00:53:52.243646 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:53:52.245561 | orchestrator | 2026-03-01 00:53:52 | INFO  | Task 0a7fe716-72be-45ad-8ed4-4cd36d26dee6 is in state STARTED 2026-03-01 00:53:52.245610 | orchestrator | 2026-03-01 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:53:55.285179 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:53:55.285674 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:53:55.286631 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:53:55.290163 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task 562eb2fe-e963-46af-b4e9-9f4e13e7795f is in state STARTED 2026-03-01 00:53:55.291409 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:53:55.291459 | orchestrator | 2026-03-01 00:53:55 | INFO  | Task 0a7fe716-72be-45ad-8ed4-4cd36d26dee6 is in state SUCCESS 2026-03-01 00:53:55.291469 | orchestrator | 2026-03-01 00:53:55 | INFO  | Wait 1 
second(s) until the next check 2026-03-01 00:53:58.328154 | orchestrator | 2026-03-01 00:53:58 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:53:58.328378 | orchestrator | 2026-03-01 00:53:58 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:53:58.329110 | orchestrator | 2026-03-01 00:53:58 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:53:58.329692 | orchestrator | 2026-03-01 00:53:58 | INFO  | Task 562eb2fe-e963-46af-b4e9-9f4e13e7795f is in state SUCCESS 2026-03-01 00:53:58.330461 | orchestrator | 2026-03-01 00:53:58 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:53:58.330502 | orchestrator | 2026-03-01 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:01.365461 | orchestrator | 2026-03-01 00:54:01 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:01.367593 | orchestrator | 2026-03-01 00:54:01 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:01.370326 | orchestrator | 2026-03-01 00:54:01 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:01.372849 | orchestrator | 2026-03-01 00:54:01 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:01.372896 | orchestrator | 2026-03-01 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:04.406556 | orchestrator | 2026-03-01 00:54:04 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:04.407649 | orchestrator | 2026-03-01 00:54:04 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:04.409861 | orchestrator | 2026-03-01 00:54:04 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:04.411687 | orchestrator | 2026-03-01 00:54:04 | INFO  | Task 
3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:04.411930 | orchestrator | 2026-03-01 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:07.447364 | orchestrator | 2026-03-01 00:54:07 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:07.447855 | orchestrator | 2026-03-01 00:54:07 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:07.448514 | orchestrator | 2026-03-01 00:54:07 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:07.451009 | orchestrator | 2026-03-01 00:54:07 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:07.451066 | orchestrator | 2026-03-01 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:10.485993 | orchestrator | 2026-03-01 00:54:10 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:10.487346 | orchestrator | 2026-03-01 00:54:10 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:10.488609 | orchestrator | 2026-03-01 00:54:10 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:10.489872 | orchestrator | 2026-03-01 00:54:10 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:10.489980 | orchestrator | 2026-03-01 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:13.524850 | orchestrator | 2026-03-01 00:54:13 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:13.528125 | orchestrator | 2026-03-01 00:54:13 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:13.529748 | orchestrator | 2026-03-01 00:54:13 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:13.531496 | orchestrator | 2026-03-01 00:54:13 | INFO  | Task 
3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:13.532705 | orchestrator | 2026-03-01 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:16.559009 | orchestrator | 2026-03-01 00:54:16 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:16.561811 | orchestrator | 2026-03-01 00:54:16 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:16.562416 | orchestrator | 2026-03-01 00:54:16 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:16.563301 | orchestrator | 2026-03-01 00:54:16 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:16.563451 | orchestrator | 2026-03-01 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:19.605736 | orchestrator | 2026-03-01 00:54:19 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:19.606595 | orchestrator | 2026-03-01 00:54:19 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:19.607408 | orchestrator | 2026-03-01 00:54:19 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:19.609256 | orchestrator | 2026-03-01 00:54:19 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:19.609394 | orchestrator | 2026-03-01 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:54:22.649062 | orchestrator | 2026-03-01 00:54:22 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state STARTED 2026-03-01 00:54:22.649740 | orchestrator | 2026-03-01 00:54:22 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:22.652590 | orchestrator | 2026-03-01 00:54:22 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:22.653335 | orchestrator | 2026-03-01 00:54:22 | INFO  | Task 
3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:22.653413 | orchestrator | 2026-03-01 00:54:22 | INFO  | Wait 1 second(s) until the next check


PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Sunday 01 March 2026 00:53:49 +0000 (0:00:00.146) 0:00:00.146 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Sunday 01 March 2026 00:53:50 +0000 (0:00:00.840) 0:00:00.987 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Sunday 01 March 2026 00:53:51 +0000 (0:00:01.206) 0:00:02.193 **********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0


TASKS RECAP ********************************************************************
Sunday 01 March 2026 00:53:52 +0000 (0:00:00.470) 0:00:02.663 **********
===============================================================================
Write kubeconfig file --------------------------------------------------- 1.21s
Get kubeconfig file ----------------------------------------------------- 0.84s
Change server address in the kubeconfig file ---------------------------- 0.47s


PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Sunday 01 March 2026 00:53:49 +0000 (0:00:00.143) 0:00:00.143 **********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Sunday 01 March 2026 00:53:50 +0000 (0:00:00.531) 0:00:00.675 **********
ok: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Sunday 01 March 2026 00:53:50 +0000 (0:00:00.685) 0:00:01.361 **********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Sunday 01 March 2026 00:53:51 +0000 (0:00:00.722) 0:00:02.084 **********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Sunday 01 March 2026 00:53:53 +0000 (0:00:01.724) 0:00:03.808 **********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Sunday 01 March 2026 00:53:53 +0000 (0:00:00.595) 0:00:04.403 **********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Sunday 01 March 2026 00:53:55 +0000 (0:00:01.886) 0:00:06.289 **********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Sunday 01 March 2026 00:53:56 +0000 (0:00:00.907) 0:00:07.197 **********
ok: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Sunday 01 March 2026 00:53:57 +0000 (0:00:00.422) 0:00:07.620 **********
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0


TASKS RECAP ********************************************************************
Sunday 01 March 2026 00:53:57 +0000 (0:00:00.322) 0:00:07.943 **********
===============================================================================
Make kubeconfig available for use inside the manager service ------------ 1.89s
Write kubeconfig file --------------------------------------------------- 1.72s
Change server address in the kubeconfig inside the manager service ------ 0.91s
Get kubeconfig file ----------------------------------------------------- 0.72s
Create .kube directory -------------------------------------------------- 0.69s
Change server address in the kubeconfig --------------------------------- 0.60s
Get home directory of operator user ------------------------------------- 0.53s
Set KUBECONFIG environment variable ------------------------------------- 0.42s
Enable kubectl command line completion ---------------------------------- 0.32s


PLAY [Set kolla_action_rabbitmq] ***********************************************

TASK [Inform the user about the following task] ********************************
Sunday 01 March 2026 00:52:03 +0000 (0:00:00.076) 0:00:00.076 **********
ok: [localhost] => {
    "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
}

TASK [Check RabbitMQ service] **************************************************
Sunday 01 March 2026 00:52:03 +0000 (0:00:00.082) 0:00:00.158 **********
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
...ignoring

TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
Sunday 01 March 2026 00:52:06 +0000 (0:00:03.300) 0:00:03.458 **********
skipping: [localhost]

TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
Sunday 01 March 2026 00:52:06 +0000 (0:00:00.174) 0:00:03.632 **********
ok: [localhost]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 01 March 2026 00:52:07 +0000 (0:00:00.721) 0:00:04.354 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Sunday 01 March 2026 00:52:08 +0000 (0:00:00.610) 0:00:04.964 **********
ok: [testbed-node-0] => (item=enable_rabbitmq_True)
ok: [testbed-node-1] => (item=enable_rabbitmq_True)
ok: [testbed-node-2] => (item=enable_rabbitmq_True)

PLAY [Apply role rabbitmq] *****************************************************

TASK [rabbitmq : include_tasks] ************************************************
Sunday 01 March 2026 00:52:08 +0000 (0:00:00.752) 0:00:05.717 **********
included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Sunday 01 March 2026 00:52:09 +0000 (0:00:00.939) 0:00:06.656 **********
ok: [testbed-node-0]

TASK [rabbitmq : Get current RabbitMQ version] *********************************
Sunday 01 March 2026 00:52:11 +0000 (0:00:01.315) 0:00:07.971 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Get new RabbitMQ version] *************************************
Sunday 01 March 2026 00:52:11 +0000 (0:00:00.349) 0:00:08.321 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
Sunday 01 March 2026 00:52:11 +0000 (0:00:00.326) 0:00:08.647 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
Sunday 01 March 2026 00:52:12 +0000 (0:00:00.294) 0:00:08.942 **********
skipping: [testbed-node-0]

TASK [rabbitmq : include_tasks] ************************************************
Sunday 01 March 2026 00:52:12 +0000 (0:00:00.572) 0:00:09.515 **********
included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Sunday 01 March 2026 00:52:13 +0000 (0:00:01.040) 0:00:10.555 **********
ok: [testbed-node-0]

TASK [rabbitmq : List RabbitMQ policies] ***************************************
Sunday 01 March 2026 00:52:14 +0000 (0:00:00.989) 0:00:11.545 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
Sunday 01 March 2026 00:52:15 +0000 (0:00:00.339) 0:00:11.884 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Ensuring config directories exist] ****************************
Sunday 01 March 2026 00:52:15 +0000 (0:00:00.390) 0:00:12.274 **********
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})

TASK [rabbitmq : Copying over config.json files for services] ******************
Sunday 01 March 2026 00:52:16 +0000 (0:00:01.291) 0:00:13.565 **********
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})

TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
Sunday 01 March 2026 00:52:20 +0000 (0:00:03.673) 0:00:17.238 **********
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)

TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
Sunday 01 March 2026 00:52:22 +0000 (0:00:01.921) 0:00:19.160 **********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)

TASK [rabbitmq : Copying over erl_inetrc] **************************************
Sunday 01 March 2026 00:52:25 +0000 (0:00:03.144) 0:00:22.304 **********
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)

TASK [rabbitmq : Copying over advanced.config] *********************************
Sunday 01 March 2026 00:52:27 +0000 (0:00:01.530) 0:00:23.835 **********
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)

TASK [rabbitmq : Copying over definitions.json] ********************************
Sunday 01 March 2026 00:52:29 +0000 (0:00:02.723) 0:00:26.559 **********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)

TASK [rabbitmq : Copying over enabled_plugins] *********************************
Sunday 01 March 2026 00:52:32 +0000 (0:00:02.341) 0:00:28.900 **********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)

TASK [rabbitmq : include_tasks] ************************************************
Sunday 01 March 2026 00:52:34 +0000 (0:00:01.890) 0:00:30.791 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [rabbitmq : Check rabbitmq containers] ************************************
Sunday 01 March 2026 00:52:34 +0000 (0:00:00.447) 0:00:31.239 **********
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})

TASK [rabbitmq : Creating rabbitmq volume] *************************************
Sunday 01 March 2026 00:52:36 +0000 (0:00:01.505) 0:00:32.745 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
Sunday 01 March 2026 00:52:37 +0000 (0:00:01.073) 0:00:33.818 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
Sunday 01 March 2026 00:52:44 +0000 (0:00:07.106) 0:00:40.924 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Sunday 01 March 2026 00:52:44 +0000 (0:00:00.283) 0:00:41.208 **********
ok: [testbed-node-0]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Sunday 01 March 2026 00:52:45 +0000 (0:00:00.587) 0:00:41.795 **********
skipping: [testbed-node-0]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Sunday 01 March 2026 00:52:45 +0000 (0:00:00.291) 0:00:42.087 **********
changed: [testbed-node-0]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Sunday 01 March 2026 00:52:47 +0000 (0:00:02.401) 0:00:44.489 **********
changed: [testbed-node-0]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Sunday 01 March 2026 00:53:41 +0000 (0:00:53.287) 0:01:37.776 **********
ok: [testbed-node-1]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Sunday 01 March 2026 00:53:41 +0000 (0:00:00.706) 0:01:38.483 **********
skipping: [testbed-node-1]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Sunday 01 March 2026 00:53:41 +0000 (0:00:00.229) 0:01:38.713 **********
changed: [testbed-node-1]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Sunday 01 March 2026 00:53:43 +0000 (0:00:01.828) 0:01:40.542 **********
changed: [testbed-node-1]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Sunday 01 March 2026 00:53:57 +0000 (0:00:13.694) 0:01:54.236 **********
ok: [testbed-node-2]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Sunday 01 March 2026 00:53:58 +0000 (0:00:00.789) 0:01:55.026 **********
skipping: [testbed-node-2]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Sunday 01 March 2026 00:53:58 +0000 (0:00:00.236) 0:01:55.262 **********
changed: [testbed-node-2]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Sunday 01 March 2026 00:54:00 +0000 (0:00:01.908) 0:01:57.170 **********
changed: [testbed-node-2]

PLAY [Apply rabbitmq post-configuration] ***************************************

TASK [Include rabbitmq post-deploy.yml] ****************************************
Sunday 01 March 2026 00:54:17 +0000 (0:00:17.503) 0:02:14.674 **********
included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Enable all stable feature flags] ******************************
Sunday 01 March 2026 00:54:19 +0000 (0:00:01.333) 0:02:16.007 **********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
[WARNING]: Could not match supplied host pattern, ignoring:
enable_outward_rabbitmq_True

PLAY [Apply role rabbitmq (outward)] *******************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring:
outward_rabbitmq_restart

PLAY [Restart rabbitmq (outward) services] *************************************
skipping: no hosts matched

PLAY [Apply rabbitmq (outward) post-configuration] *****************************
skipping: no hosts matched

PLAY RECAP *********************************************************************
localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0


TASKS RECAP ********************************************************************
Sunday 01 March 2026 00:54:22 +0000 (0:00:03.345) 0:02:19.353 **********
===============================================================================
rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.49s
rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.11s
rabbitmq : Restart rabbitmq container ----------------------------------- 6.14s
rabbitmq : Copying over config.json files for services ------------------ 3.67s
rabbitmq : Enable all stable feature flags ------------------------------ 3.35s
Check RabbitMQ service -------------------------------------------------- 3.30s
rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.14s
rabbitmq : Copying over advanced.config --------------------------------- 2.72s
rabbitmq : Copying over definitions.json -------------------------------- 2.34s
2026-03-01 00:54:25.690471 |
orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.08s 2026-03-01 00:54:25.690476 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.92s 2026-03-01 00:54:25.690480 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.89s 2026-03-01 00:54:25.690485 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.53s 2026-03-01 00:54:25.690489 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.51s 2026-03-01 00:54:25.690494 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.33s 2026-03-01 00:54:25.690499 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.32s 2026-03-01 00:54:25.690503 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.29s 2026-03-01 00:54:25.690512 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.07s 2026-03-01 00:54:25.690517 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.04s 2026-03-01 00:54:25.690522 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s 2026-03-01 00:54:25.690526 | orchestrator | 2026-03-01 00:54:25 | INFO  | Task c90ee38e-c140-429a-a128-b4ef171ca4b1 is in state SUCCESS 2026-03-01 00:54:25.690531 | orchestrator | 2026-03-01 00:54:25 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED 2026-03-01 00:54:25.690535 | orchestrator | 2026-03-01 00:54:25 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:54:25.690540 | orchestrator | 2026-03-01 00:54:25 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:54:25.690545 | orchestrator | 2026-03-01 00:54:25 | INFO  | Wait 1 second(s) until the next check 
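The `Task <id> is in state …` / `Wait 1 second(s) until the next check` lines above come from the OSISM client polling its queued tasks until each one reports SUCCESS. A minimal sketch of that polling pattern (illustrative only, not the actual osism client code; `poll` and the injectable `sleep` are assumptions for testability):

```python
import time


def wait_for_tasks(task_ids, poll, interval=1, sleep=time.sleep):
    """Poll task states until every task reaches SUCCESS.

    task_ids: iterable of task IDs to watch.
    poll:     callable(task_id) -> state string ("STARTED", "SUCCESS", ...).
    Emits log lines in the same shape as the job output above and
    returns them.
    """
    lines = []
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = poll(task_id)
            lines.append(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            lines.append(f"Wait {interval} second(s) until the next check")
            sleep(interval)
    return lines
```

With a fake `poll` that reports STARTED once and then SUCCESS, the function emits one wait line and stops as soon as the task finishes.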
2026-03-01 00:54:28.724307 | orchestrator | 2026-03-01 00:54:28 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:28.726380 | orchestrator | 2026-03-01 00:54:28 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:28.728326 | orchestrator | 2026-03-01 00:54:28 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:28.728384 | orchestrator | 2026-03-01 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:31.771256 | orchestrator | 2026-03-01 00:54:31 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:31.772769 | orchestrator | 2026-03-01 00:54:31 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:31.774849 | orchestrator | 2026-03-01 00:54:31 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:31.776394 | orchestrator | 2026-03-01 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:34.803877 | orchestrator | 2026-03-01 00:54:34 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:34.805134 | orchestrator | 2026-03-01 00:54:34 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:34.806508 | orchestrator | 2026-03-01 00:54:34 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:34.806556 | orchestrator | 2026-03-01 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:37.845683 | orchestrator | 2026-03-01 00:54:37 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:37.847813 | orchestrator | 2026-03-01 00:54:37 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:37.849595 | orchestrator | 2026-03-01 00:54:37 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:37.849933 | orchestrator | 2026-03-01 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:40.885447 | orchestrator | 2026-03-01 00:54:40 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:40.889601 | orchestrator | 2026-03-01 00:54:40 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:40.891908 | orchestrator | 2026-03-01 00:54:40 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:40.892531 | orchestrator | 2026-03-01 00:54:40 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:43.929679 | orchestrator | 2026-03-01 00:54:43 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:43.930976 | orchestrator | 2026-03-01 00:54:43 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:43.933912 | orchestrator | 2026-03-01 00:54:43 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:43.934266 | orchestrator | 2026-03-01 00:54:43 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:46.972652 | orchestrator | 2026-03-01 00:54:46 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:46.974972 | orchestrator | 2026-03-01 00:54:46 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:46.979598 | orchestrator | 2026-03-01 00:54:46 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:46.979685 | orchestrator | 2026-03-01 00:54:46 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:50.019565 | orchestrator | 2026-03-01 00:54:50 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:50.021365 | orchestrator | 2026-03-01 00:54:50 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:50.022068 | orchestrator | 2026-03-01 00:54:50 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:50.022116 | orchestrator | 2026-03-01 00:54:50 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:53.060278 | orchestrator | 2026-03-01 00:54:53 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:53.061435 | orchestrator | 2026-03-01 00:54:53 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:53.063090 | orchestrator | 2026-03-01 00:54:53 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:53.065397 | orchestrator | 2026-03-01 00:54:53 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:56.107058 | orchestrator | 2026-03-01 00:54:56 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:56.109187 | orchestrator | 2026-03-01 00:54:56 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:56.111045 | orchestrator | 2026-03-01 00:54:56 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:56.111094 | orchestrator | 2026-03-01 00:54:56 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:54:59.164838 | orchestrator | 2026-03-01 00:54:59 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:54:59.165508 | orchestrator | 2026-03-01 00:54:59 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:54:59.166792 | orchestrator | 2026-03-01 00:54:59 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:54:59.166844 | orchestrator | 2026-03-01 00:54:59 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:55:02.210293 | orchestrator | 2026-03-01 00:55:02 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:55:02.210376 | orchestrator | 2026-03-01 00:55:02 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:55:02.212309 | orchestrator | 2026-03-01 00:55:02 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:55:02.212382 | orchestrator | 2026-03-01 00:55:02 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:55:05.244426 | orchestrator | 2026-03-01 00:55:05 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:55:05.244555 | orchestrator | 2026-03-01 00:55:05 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:55:05.245236 | orchestrator | 2026-03-01 00:55:05 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:55:05.245309 | orchestrator | 2026-03-01 00:55:05 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:55:08.273634 | orchestrator | 2026-03-01 00:55:08 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state STARTED
2026-03-01 00:55:08.273923 | orchestrator | 2026-03-01 00:55:08 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:55:08.275986 | orchestrator | 2026-03-01 00:55:08 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:55:08.276090 | orchestrator | 2026-03-01 00:55:08 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:55:11.303256 | orchestrator | 2026-03-01 00:55:11 | INFO  | Task c7acfc5c-f885-4538-9aee-50df55e92bbc is in state SUCCESS
2026-03-01 00:55:11.304523 | orchestrator |
2026-03-01 00:55:11.304625 | orchestrator |
2026-03-01 00:55:11.304637 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 00:55:11.304646 | orchestrator |
2026-03-01 00:55:11.304653 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 00:55:11.304661 | orchestrator | Sunday 01 March 2026 00:52:50 +0000 (0:00:00.186) 0:00:00.186 **********
2026-03-01 00:55:11.304669 | orchestrator | ok: [testbed-node-3]
2026-03-01 00:55:11.304678 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:55:11.304685 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:55:11.304692 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.304699 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.304705 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.304712 | orchestrator | 2026-03-01 00:55:11.304719 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 00:55:11.304727 | orchestrator | Sunday 01 March 2026 00:52:51 +0000 (0:00:00.722) 0:00:00.908 ********** 2026-03-01 00:55:11.304734 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-01 00:55:11.304741 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-01 00:55:11.304769 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-01 00:55:11.304777 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-01 00:55:11.304838 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-01 00:55:11.304847 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-01 00:55:11.304853 | orchestrator | 2026-03-01 00:55:11.304860 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-01 00:55:11.304867 | orchestrator | 2026-03-01 00:55:11.304874 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-01 00:55:11.304880 | orchestrator | Sunday 01 March 2026 00:52:52 +0000 (0:00:01.106) 0:00:02.015 ********** 2026-03-01 00:55:11.304888 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:55:11.304897 | orchestrator | 2026-03-01 00:55:11.304904 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-01 
00:55:11.304911 | orchestrator | Sunday 01 March 2026 00:52:53 +0000 (0:00:01.107) 0:00:03.123 ********** 2026-03-01 00:55:11.304921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304953 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.304981 | orchestrator | 2026-03-01 00:55:11.304998 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-01 00:55:11.305005 | orchestrator | Sunday 01 March 2026 00:52:54 +0000 (0:00:01.051) 0:00:04.174 ********** 2026-03-01 00:55:11.305019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305061 | orchestrator | 2026-03-01 00:55:11.305068 | 
orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-01 00:55:11.305076 | orchestrator | Sunday 01 March 2026 00:52:56 +0000 (0:00:01.667) 0:00:05.842 ********** 2026-03-01 00:55:11.305083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305144 | orchestrator | 2026-03-01 00:55:11.305150 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-01 00:55:11.305214 | orchestrator | Sunday 01 March 2026 00:52:57 +0000 (0:00:01.644) 0:00:07.486 ********** 2026-03-01 00:55:11.305223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305275 | orchestrator | 2026-03-01 00:55:11.305530 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-01 00:55:11.305538 | orchestrator | Sunday 01 March 2026 00:52:59 +0000 (0:00:01.838) 0:00:09.325 ********** 2026-03-01 00:55:11.305545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.305584 | orchestrator | 2026-03-01 00:55:11.305590 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-01 00:55:11.305597 | orchestrator | Sunday 01 March 2026 00:53:01 +0000 (0:00:01.645) 0:00:10.971 ********** 2026-03-01 00:55:11.305629 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.305637 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:55:11.305643 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:55:11.305649 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:55:11.305661 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.305667 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.305673 | orchestrator | 2026-03-01 00:55:11.305679 | orchestrator | TASK [ovn-controller 
: Configure OVN in OVSDB] ********************************* 2026-03-01 00:55:11.305686 | orchestrator | Sunday 01 March 2026 00:53:04 +0000 (0:00:02.950) 0:00:13.922 ********** 2026-03-01 00:55:11.305692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-01 00:55:11.305699 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-01 00:55:11.305706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-01 00:55:11.305713 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-01 00:55:11.305719 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-01 00:55:11.305725 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-01 00:55:11.305732 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305761 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305774 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-01 00:55:11.305780 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305789 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305795 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305815 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305822 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-01 00:55:11.305828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305858 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-01 00:55:11.305865 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305871 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305889 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305901 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-01 00:55:11.305908 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305914 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-01 00:55:11.305920 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305927 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305932 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-01 00:55:11.305945 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-01 00:55:11.305952 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-01 00:55:11.305958 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-01 00:55:11.305964 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 
2026-03-01 00:55:11.305971 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-01 00:55:11.305977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-01 00:55:11.305983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-01 00:55:11.305996 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-01 00:55:11.306004 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-01 00:55:11.306009 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-01 00:55:11.306068 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-01 00:55:11.306076 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-01 00:55:11.306082 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-01 00:55:11.306088 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-01 00:55:11.306094 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-01 00:55:11.306101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-01 00:55:11.306107 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-01 00:55:11.306113 | orchestrator | 2026-03-01 00:55:11.306120 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306132 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:20.861) 0:00:34.783 ********** 2026-03-01 00:55:11.306138 | orchestrator | 2026-03-01 00:55:11.306145 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306152 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.062) 0:00:34.846 ********** 2026-03-01 00:55:11.306173 | orchestrator | 2026-03-01 00:55:11.306180 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306187 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.073) 0:00:34.920 ********** 2026-03-01 00:55:11.306193 | orchestrator | 2026-03-01 00:55:11.306199 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306205 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.143) 0:00:35.063 ********** 2026-03-01 00:55:11.306211 | orchestrator | 2026-03-01 00:55:11.306217 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306223 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.131) 0:00:35.194 ********** 2026-03-01 00:55:11.306229 | orchestrator | 2026-03-01 00:55:11.306235 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-01 00:55:11.306242 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.131) 0:00:35.325 ********** 2026-03-01 00:55:11.306250 | orchestrator | 2026-03-01 00:55:11.306254 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2026-03-01 00:55:11.306258 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:00.124) 0:00:35.450 ********** 2026-03-01 00:55:11.306263 | orchestrator | ok: [testbed-node-4] 2026-03-01 00:55:11.306268 | orchestrator | ok: [testbed-node-5] 2026-03-01 00:55:11.306272 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306277 | orchestrator | ok: [testbed-node-3] 2026-03-01 00:55:11.306281 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306285 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306290 | orchestrator | 2026-03-01 00:55:11.306294 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-01 00:55:11.306298 | orchestrator | Sunday 01 March 2026 00:53:27 +0000 (0:00:02.136) 0:00:37.587 ********** 2026-03-01 00:55:11.306303 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.306308 | orchestrator | changed: [testbed-node-3] 2026-03-01 00:55:11.306312 | orchestrator | changed: [testbed-node-5] 2026-03-01 00:55:11.306316 | orchestrator | changed: [testbed-node-4] 2026-03-01 00:55:11.306320 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.306325 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.306329 | orchestrator | 2026-03-01 00:55:11.306333 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-01 00:55:11.306338 | orchestrator | 2026-03-01 00:55:11.306342 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-01 00:55:11.306346 | orchestrator | Sunday 01 March 2026 00:53:55 +0000 (0:00:27.665) 0:01:05.252 ********** 2026-03-01 00:55:11.306350 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:55:11.306355 | orchestrator | 2026-03-01 00:55:11.306359 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-01 00:55:11.306364 | orchestrator | Sunday 01 March 2026 00:53:56 +0000 (0:00:01.283) 0:01:06.536 ********** 2026-03-01 00:55:11.306368 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:55:11.306373 | orchestrator | 2026-03-01 00:55:11.306377 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-01 00:55:11.306382 | orchestrator | Sunday 01 March 2026 00:53:57 +0000 (0:00:00.889) 0:01:07.425 ********** 2026-03-01 00:55:11.306386 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306391 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306395 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306399 | orchestrator | 2026-03-01 00:55:11.306404 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-01 00:55:11.306418 | orchestrator | Sunday 01 March 2026 00:53:58 +0000 (0:00:01.201) 0:01:08.627 ********** 2026-03-01 00:55:11.306422 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306431 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306436 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306445 | orchestrator | 2026-03-01 00:55:11.306450 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-01 00:55:11.306455 | orchestrator | Sunday 01 March 2026 00:53:59 +0000 (0:00:00.346) 0:01:08.973 ********** 2026-03-01 00:55:11.306459 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306463 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306468 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306473 | orchestrator | 2026-03-01 00:55:11.306477 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-01 00:55:11.306483 | orchestrator | Sunday 01 March 2026 
00:53:59 +0000 (0:00:00.339) 0:01:09.312 ********** 2026-03-01 00:55:11.306490 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306496 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306504 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306513 | orchestrator | 2026-03-01 00:55:11.306521 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-01 00:55:11.306526 | orchestrator | Sunday 01 March 2026 00:53:59 +0000 (0:00:00.307) 0:01:09.620 ********** 2026-03-01 00:55:11.306532 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.306538 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.306544 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.306550 | orchestrator | 2026-03-01 00:55:11.306555 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-01 00:55:11.306560 | orchestrator | Sunday 01 March 2026 00:54:00 +0000 (0:00:00.530) 0:01:10.151 ********** 2026-03-01 00:55:11.306566 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306572 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306578 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306584 | orchestrator | 2026-03-01 00:55:11.306589 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-01 00:55:11.306595 | orchestrator | Sunday 01 March 2026 00:54:00 +0000 (0:00:00.304) 0:01:10.455 ********** 2026-03-01 00:55:11.306663 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306673 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306680 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306688 | orchestrator | 2026-03-01 00:55:11.306695 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-01 00:55:11.306701 | orchestrator | Sunday 01 March 2026 00:54:01 +0000 (0:00:00.296) 
0:01:10.752 ********** 2026-03-01 00:55:11.306707 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306713 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306719 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306726 | orchestrator | 2026-03-01 00:55:11.306732 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-01 00:55:11.306738 | orchestrator | Sunday 01 March 2026 00:54:01 +0000 (0:00:00.289) 0:01:11.041 ********** 2026-03-01 00:55:11.306744 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306751 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306758 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306763 | orchestrator | 2026-03-01 00:55:11.306770 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-01 00:55:11.306776 | orchestrator | Sunday 01 March 2026 00:54:01 +0000 (0:00:00.488) 0:01:11.530 ********** 2026-03-01 00:55:11.306782 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306788 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306794 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306799 | orchestrator | 2026-03-01 00:55:11.306805 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-01 00:55:11.306811 | orchestrator | Sunday 01 March 2026 00:54:02 +0000 (0:00:00.300) 0:01:11.830 ********** 2026-03-01 00:55:11.306828 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306835 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306841 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306847 | orchestrator | 2026-03-01 00:55:11.306854 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-01 00:55:11.306860 | orchestrator | Sunday 01 March 2026 00:54:02 +0000 (0:00:00.282) 
0:01:12.113 ********** 2026-03-01 00:55:11.306865 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306871 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306878 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306884 | orchestrator | 2026-03-01 00:55:11.306890 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-01 00:55:11.306896 | orchestrator | Sunday 01 March 2026 00:54:02 +0000 (0:00:00.285) 0:01:12.399 ********** 2026-03-01 00:55:11.306904 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306910 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306917 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306923 | orchestrator | 2026-03-01 00:55:11.306930 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-01 00:55:11.306937 | orchestrator | Sunday 01 March 2026 00:54:03 +0000 (0:00:00.475) 0:01:12.874 ********** 2026-03-01 00:55:11.306943 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306952 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306960 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.306969 | orchestrator | 2026-03-01 00:55:11.306975 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-01 00:55:11.306981 | orchestrator | Sunday 01 March 2026 00:54:03 +0000 (0:00:00.301) 0:01:13.176 ********** 2026-03-01 00:55:11.306987 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.306993 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.306999 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307004 | orchestrator | 2026-03-01 00:55:11.307010 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-01 00:55:11.307017 | orchestrator | Sunday 01 March 2026 00:54:03 +0000 (0:00:00.298) 
0:01:13.475 ********** 2026-03-01 00:55:11.307023 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307028 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307034 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307040 | orchestrator | 2026-03-01 00:55:11.307046 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-01 00:55:11.307051 | orchestrator | Sunday 01 March 2026 00:54:04 +0000 (0:00:00.310) 0:01:13.785 ********** 2026-03-01 00:55:11.307058 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307074 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307090 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307094 | orchestrator | 2026-03-01 00:55:11.307098 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-01 00:55:11.307102 | orchestrator | Sunday 01 March 2026 00:54:04 +0000 (0:00:00.295) 0:01:14.080 ********** 2026-03-01 00:55:11.307107 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:55:11.307111 | orchestrator | 2026-03-01 00:55:11.307115 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-01 00:55:11.307119 | orchestrator | Sunday 01 March 2026 00:54:05 +0000 (0:00:00.794) 0:01:14.874 ********** 2026-03-01 00:55:11.307123 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.307127 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.307131 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.307135 | orchestrator | 2026-03-01 00:55:11.307138 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-01 00:55:11.307142 | orchestrator | Sunday 01 March 2026 00:54:05 +0000 (0:00:00.502) 0:01:15.377 ********** 2026-03-01 00:55:11.307146 | orchestrator | ok: 
[testbed-node-0] 2026-03-01 00:55:11.307156 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.307218 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.307223 | orchestrator | 2026-03-01 00:55:11.307228 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-01 00:55:11.307236 | orchestrator | Sunday 01 March 2026 00:54:06 +0000 (0:00:00.635) 0:01:16.013 ********** 2026-03-01 00:55:11.307245 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307252 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307257 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307263 | orchestrator | 2026-03-01 00:55:11.307269 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-01 00:55:11.307275 | orchestrator | Sunday 01 March 2026 00:54:07 +0000 (0:00:00.683) 0:01:16.696 ********** 2026-03-01 00:55:11.307281 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307287 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307293 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307300 | orchestrator | 2026-03-01 00:55:11.307307 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-01 00:55:11.307312 | orchestrator | Sunday 01 March 2026 00:54:07 +0000 (0:00:00.339) 0:01:17.035 ********** 2026-03-01 00:55:11.307315 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307319 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307323 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307336 | orchestrator | 2026-03-01 00:55:11.307340 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-01 00:55:11.307343 | orchestrator | Sunday 01 March 2026 00:54:07 +0000 (0:00:00.360) 0:01:17.396 ********** 2026-03-01 00:55:11.307347 | orchestrator | skipping: 
[testbed-node-0] 2026-03-01 00:55:11.307351 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307355 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307359 | orchestrator | 2026-03-01 00:55:11.307362 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-01 00:55:11.307366 | orchestrator | Sunday 01 March 2026 00:54:08 +0000 (0:00:00.332) 0:01:17.729 ********** 2026-03-01 00:55:11.307370 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307374 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307378 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307383 | orchestrator | 2026-03-01 00:55:11.307389 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-01 00:55:11.307395 | orchestrator | Sunday 01 March 2026 00:54:08 +0000 (0:00:00.540) 0:01:18.270 ********** 2026-03-01 00:55:11.307401 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307407 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307413 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307419 | orchestrator | 2026-03-01 00:55:11.307425 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-01 00:55:11.307431 | orchestrator | Sunday 01 March 2026 00:54:08 +0000 (0:00:00.317) 0:01:18.587 ********** 2026-03-01 00:55:11.307440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307513 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307534 | orchestrator | 2026-03-01 00:55:11.307540 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-01 00:55:11.307547 | orchestrator | Sunday 01 March 2026 00:54:10 +0000 (0:00:01.686) 0:01:20.273 ********** 2026-03-01 00:55:11.307553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307560 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307614 | orchestrator | 2026-03-01 00:55:11.307618 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-01 00:55:11.307622 | orchestrator | Sunday 01 March 2026 00:54:14 +0000 (0:00:04.355) 0:01:24.629 ********** 2026-03-01 00:55:11.307626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.307675 | orchestrator | 2026-03-01 00:55:11.307682 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-01 00:55:11.307687 | orchestrator | Sunday 01 March 2026 00:54:17 +0000 (0:00:02.821) 0:01:27.451 ********** 2026-03-01 00:55:11.307693 | orchestrator | 2026-03-01 00:55:11.307699 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-03-01 00:55:11.307709 | orchestrator | Sunday 01 March 2026 00:54:17 +0000 (0:00:00.069) 0:01:27.521 ********** 2026-03-01 00:55:11.307719 | orchestrator | 2026-03-01 00:55:11.307724 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-01 00:55:11.307729 | orchestrator | Sunday 01 March 2026 00:54:17 +0000 (0:00:00.066) 0:01:27.588 ********** 2026-03-01 00:55:11.307736 | orchestrator | 2026-03-01 00:55:11.307741 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-01 00:55:11.307748 | orchestrator | Sunday 01 March 2026 00:54:18 +0000 (0:00:00.089) 0:01:27.677 ********** 2026-03-01 00:55:11.307754 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.307761 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.307767 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.307773 | orchestrator | 2026-03-01 00:55:11.307778 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-01 00:55:11.307785 | orchestrator | Sunday 01 March 2026 00:54:21 +0000 (0:00:03.393) 0:01:31.070 ********** 2026-03-01 00:55:11.307797 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.307803 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.307808 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.307815 | orchestrator | 2026-03-01 00:55:11.307820 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-01 00:55:11.307826 | orchestrator | Sunday 01 March 2026 00:54:24 +0000 (0:00:02.893) 0:01:33.964 ********** 2026-03-01 00:55:11.307832 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.307838 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.307844 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.307850 | orchestrator | 2026-03-01 
00:55:11.307857 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-01 00:55:11.307864 | orchestrator | Sunday 01 March 2026 00:54:32 +0000 (0:00:07.853) 0:01:41.818 ********** 2026-03-01 00:55:11.307870 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.307876 | orchestrator | 2026-03-01 00:55:11.307883 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-01 00:55:11.307888 | orchestrator | Sunday 01 March 2026 00:54:32 +0000 (0:00:00.128) 0:01:41.947 ********** 2026-03-01 00:55:11.307892 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.307896 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.307900 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.307904 | orchestrator | 2026-03-01 00:55:11.307907 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-01 00:55:11.307912 | orchestrator | Sunday 01 March 2026 00:54:33 +0000 (0:00:00.877) 0:01:42.824 ********** 2026-03-01 00:55:11.307915 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307919 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307924 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.307927 | orchestrator | 2026-03-01 00:55:11.307931 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-01 00:55:11.307935 | orchestrator | Sunday 01 March 2026 00:54:33 +0000 (0:00:00.605) 0:01:43.429 ********** 2026-03-01 00:55:11.307939 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.307943 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.307947 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.307951 | orchestrator | 2026-03-01 00:55:11.307954 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-01 00:55:11.307958 | orchestrator | Sunday 01 March 2026 
00:54:34 +0000 (0:00:00.854) 0:01:44.284 ********** 2026-03-01 00:55:11.307962 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.307966 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.307970 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.307973 | orchestrator | 2026-03-01 00:55:11.307977 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-01 00:55:11.307981 | orchestrator | Sunday 01 March 2026 00:54:35 +0000 (0:00:00.695) 0:01:44.979 ********** 2026-03-01 00:55:11.307990 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.307994 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308003 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308007 | orchestrator | 2026-03-01 00:55:11.308011 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-01 00:55:11.308015 | orchestrator | Sunday 01 March 2026 00:54:36 +0000 (0:00:00.688) 0:01:45.668 ********** 2026-03-01 00:55:11.308019 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308023 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308027 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308031 | orchestrator | 2026-03-01 00:55:11.308035 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-01 00:55:11.308038 | orchestrator | Sunday 01 March 2026 00:54:36 +0000 (0:00:00.742) 0:01:46.410 ********** 2026-03-01 00:55:11.308042 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308046 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308050 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308059 | orchestrator | 2026-03-01 00:55:11.308063 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-01 00:55:11.308067 | orchestrator | Sunday 01 March 2026 00:54:37 +0000 (0:00:00.311) 0:01:46.722 ********** 
2026-03-01 00:55:11.308071 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308084 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308093 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308097 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308101 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308108 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308112 | orchestrator | 2026-03-01 00:55:11.308116 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-01 00:55:11.308124 | 
orchestrator | Sunday 01 March 2026 00:54:38 +0000 (0:00:01.528) 0:01:48.250 ********** 2026-03-01 00:55:11.308128 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308133 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308137 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308141 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308153 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308239 | orchestrator | 2026-03-01 
00:55:11.308245 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-01 00:55:11.308266 | orchestrator | Sunday 01 March 2026 00:54:42 +0000 (0:00:04.168) 0:01:52.419 ********** 2026-03-01 00:55:11.308298 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308312 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308318 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308344 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 00:55:11.308356 | orchestrator | 2026-03-01 00:55:11.308362 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-01 00:55:11.308368 | orchestrator | Sunday 01 March 2026 00:54:45 +0000 (0:00:02.646) 0:01:55.066 ********** 2026-03-01 00:55:11.308380 | orchestrator | 2026-03-01 00:55:11.308387 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-01 00:55:11.308393 | orchestrator | Sunday 01 March 2026 00:54:45 +0000 (0:00:00.098) 0:01:55.164 ********** 2026-03-01 00:55:11.308399 | orchestrator | 2026-03-01 00:55:11.308405 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-01 00:55:11.308412 | orchestrator | Sunday 01 March 2026 00:54:45 +0000 (0:00:00.064) 0:01:55.229 ********** 2026-03-01 00:55:11.308418 | orchestrator | 2026-03-01 00:55:11.308425 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-01 00:55:11.308431 | orchestrator | Sunday 01 March 2026 00:54:45 +0000 (0:00:00.062) 0:01:55.291 ********** 2026-03-01 00:55:11.308437 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.308447 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.308454 | orchestrator | 2026-03-01 00:55:11.308464 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-01 00:55:11.308470 | orchestrator | Sunday 01 March 2026 00:54:51 +0000 (0:00:06.272) 0:02:01.564 ********** 2026-03-01 00:55:11.308475 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.308481 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.308486 | orchestrator | 2026-03-01 00:55:11.308492 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-01 00:55:11.308498 | orchestrator | Sunday 01 March 2026 
00:54:58 +0000 (0:00:06.623) 0:02:08.188 ********** 2026-03-01 00:55:11.308504 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:55:11.308509 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:55:11.308515 | orchestrator | 2026-03-01 00:55:11.308520 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-01 00:55:11.308526 | orchestrator | Sunday 01 March 2026 00:55:05 +0000 (0:00:06.621) 0:02:14.809 ********** 2026-03-01 00:55:11.308531 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:55:11.308537 | orchestrator | 2026-03-01 00:55:11.308544 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-01 00:55:11.308549 | orchestrator | Sunday 01 March 2026 00:55:05 +0000 (0:00:00.134) 0:02:14.943 ********** 2026-03-01 00:55:11.308555 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308560 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308566 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308572 | orchestrator | 2026-03-01 00:55:11.308577 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-01 00:55:11.308583 | orchestrator | Sunday 01 March 2026 00:55:06 +0000 (0:00:00.859) 0:02:15.803 ********** 2026-03-01 00:55:11.308589 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.308594 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.308600 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.308605 | orchestrator | 2026-03-01 00:55:11.308611 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-01 00:55:11.308617 | orchestrator | Sunday 01 March 2026 00:55:06 +0000 (0:00:00.651) 0:02:16.454 ********** 2026-03-01 00:55:11.308623 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308629 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308634 | orchestrator | ok: 
[testbed-node-2] 2026-03-01 00:55:11.308640 | orchestrator | 2026-03-01 00:55:11.308647 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-01 00:55:11.308652 | orchestrator | Sunday 01 March 2026 00:55:07 +0000 (0:00:00.840) 0:02:17.295 ********** 2026-03-01 00:55:11.308659 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:55:11.308665 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:55:11.308670 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:55:11.308676 | orchestrator | 2026-03-01 00:55:11.308682 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-01 00:55:11.308688 | orchestrator | Sunday 01 March 2026 00:55:08 +0000 (0:00:00.632) 0:02:17.927 ********** 2026-03-01 00:55:11.308693 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308699 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308713 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308719 | orchestrator | 2026-03-01 00:55:11.308725 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-01 00:55:11.308731 | orchestrator | Sunday 01 March 2026 00:55:09 +0000 (0:00:00.801) 0:02:18.729 ********** 2026-03-01 00:55:11.308737 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:55:11.308743 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:55:11.308749 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:55:11.308755 | orchestrator | 2026-03-01 00:55:11.308761 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:55:11.308767 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-01 00:55:11.308775 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-01 00:55:11.308783 | orchestrator | testbed-node-2 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-01 00:55:11.308788 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:55:11.308795 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:55:11.308801 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 00:55:11.308808 | orchestrator | 2026-03-01 00:55:11.308814 | orchestrator | 2026-03-01 00:55:11.308820 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:55:11.308827 | orchestrator | Sunday 01 March 2026 00:55:09 +0000 (0:00:00.827) 0:02:19.556 ********** 2026-03-01 00:55:11.308835 | orchestrator | =============================================================================== 2026-03-01 00:55:11.308841 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.67s 2026-03-01 00:55:11.308845 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.86s 2026-03-01 00:55:11.308849 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.47s 2026-03-01 00:55:11.308853 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.67s 2026-03-01 00:55:11.308857 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.52s 2026-03-01 00:55:11.308861 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.36s 2026-03-01 00:55:11.308881 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.17s 2026-03-01 00:55:11.308892 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.95s 2026-03-01 00:55:11.308896 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 2.82s 2026-03-01 00:55:11.308900 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.65s 2026-03-01 00:55:11.308904 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.14s 2026-03-01 00:55:11.308908 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.84s 2026-03-01 00:55:11.308911 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s 2026-03-01 00:55:11.308915 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.67s 2026-03-01 00:55:11.308919 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.65s 2026-03-01 00:55:11.308923 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.64s 2026-03-01 00:55:11.308926 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2026-03-01 00:55:11.308930 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.28s 2026-03-01 00:55:11.308939 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.20s 2026-03-01 00:55:11.308943 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.11s 2026-03-01 00:55:11.308947 | orchestrator | 2026-03-01 00:55:11 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:11.309128 | orchestrator | 2026-03-01 00:55:11 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:11.309143 | orchestrator | 2026-03-01 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:14.362145 | orchestrator | 2026-03-01 00:55:14 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:14.363480 | orchestrator | 2026-03-01 00:55:14 | 
INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:14.363529 | orchestrator | 2026-03-01 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:17.410130 | orchestrator | 2026-03-01 00:55:17 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:17.411503 | orchestrator | 2026-03-01 00:55:17 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:17.411539 | orchestrator | 2026-03-01 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:20.459214 | orchestrator | 2026-03-01 00:55:20 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:20.459291 | orchestrator | 2026-03-01 00:55:20 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:20.459298 | orchestrator | 2026-03-01 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:23.500873 | orchestrator | 2026-03-01 00:55:23 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:23.503127 | orchestrator | 2026-03-01 00:55:23 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:23.503352 | orchestrator | 2026-03-01 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:26.543912 | orchestrator | 2026-03-01 00:55:26 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:26.547439 | orchestrator | 2026-03-01 00:55:26 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:55:26.547522 | orchestrator | 2026-03-01 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:55:29.603283 | orchestrator | 2026-03-01 00:55:29 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED 2026-03-01 00:55:29.607292 | orchestrator | 2026-03-01 00:55:29 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 
2026-03-01 00:55:29.607365 | orchestrator | 2026-03-01 00:55:29 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:55:32.654654 | orchestrator | 2026-03-01 00:55:32 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:55:32.655574 | orchestrator | 2026-03-01 00:55:32 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:55:32.655701 | orchestrator | 2026-03-01 00:55:32 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:58:01.874105 | orchestrator | 2026-03-01 00:58:01 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state STARTED
2026-03-01 00:58:01.874467 | orchestrator | 2026-03-01 00:58:01 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED
2026-03-01 00:58:01.874495 | orchestrator | 2026-03-01 00:58:01 | INFO  | Wait 1 second(s) until the next check
2026-03-01 00:58:04.923062 | orchestrator | 2026-03-01 00:58:04 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED
2026-03-01 00:58:04.933807 | orchestrator | 2026-03-01 00:58:04 | INFO  | Task 81033248-a07c-4cdb-8b15-47bc489f0a69 is in state SUCCESS
2026-03-01 00:58:04.934393 | orchestrator |
2026-03-01 00:58:04.935652 | orchestrator |
2026-03-01 00:58:04.935721 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 00:58:04.935729 | orchestrator |
2026-03-01 00:58:04.935734 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01
00:58:04.935738 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.247) 0:00:00.247 **********
2026-03-01 00:58:04.935742 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:58:04.935747 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:58:04.935751 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:58:04.935755 | orchestrator |
2026-03-01 00:58:04.935759 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 00:58:04.935763 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.272) 0:00:00.520 **********
2026-03-01 00:58:04.935767 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-01 00:58:04.935771 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-01 00:58:04.935775 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-01 00:58:04.935779 | orchestrator |
2026-03-01 00:58:04.935783 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-01 00:58:04.935786 | orchestrator |
2026-03-01 00:58:04.935790 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-01 00:58:04.935794 | orchestrator | Sunday 01 March 2026 00:51:47 +0000 (0:00:00.390) 0:00:00.910 **********
2026-03-01 00:58:04.935798 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.935802 | orchestrator |
2026-03-01 00:58:04.935806 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-01 00:58:04.935810 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:00.576) 0:00:01.487 **********
2026-03-01 00:58:04.935814 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:58:04.935818 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:58:04.935822 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:58:04.935825 | orchestrator |
2026-03-01 00:58:04.935829 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-01 00:58:04.935833 | orchestrator | Sunday 01 March 2026 00:51:50 +0000 (0:00:01.676) 0:00:03.164 **********
2026-03-01 00:58:04.935837 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.935841 | orchestrator |
2026-03-01 00:58:04.935845 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-01 00:58:04.935849 | orchestrator | Sunday 01 March 2026 00:51:50 +0000 (0:00:00.656) 0:00:03.820 **********
2026-03-01 00:58:04.935852 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:58:04.935856 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:58:04.935860 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:58:04.935864 | orchestrator |
2026-03-01 00:58:04.935868 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-01 00:58:04.935871 | orchestrator | Sunday 01 March 2026 00:51:51 +0000 (0:00:00.706) 0:00:04.526 **********
2026-03-01 00:58:04.935875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935879 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-01 00:58:04.935898 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-01 00:58:04.935913 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-01 00:58:04.935917 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-01 00:58:04.935921 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-01 00:58:04.935925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-01 00:58:04.935928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-01 00:58:04.935932 | orchestrator |
2026-03-01 00:58:04.935936 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-01 00:58:04.935940 | orchestrator | Sunday 01 March 2026 00:51:55 +0000 (0:00:03.652) 0:00:08.179 **********
2026-03-01 00:58:04.935965 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-01 00:58:04.935970 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-01 00:58:04.935974 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-01 00:58:04.935978 | orchestrator |
2026-03-01 00:58:04.935982 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-01 00:58:04.935986 | orchestrator | Sunday 01 March 2026 00:51:56 +0000 (0:00:01.300) 0:00:09.479 **********
2026-03-01 00:58:04.935989 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-01 00:58:04.935993 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-01 00:58:04.935997 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-01 00:58:04.936000 | orchestrator |
2026-03-01 00:58:04.936004 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-01
00:58:04.936008 | orchestrator | Sunday 01 March 2026 00:51:58 +0000 (0:00:01.825) 0:00:11.305 **********
2026-03-01 00:58:04.936012 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-01 00:58:04.936015 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.936029 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-01 00:58:04.936036 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.936040 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-01 00:58:04.936043 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.936047 | orchestrator |
2026-03-01 00:58:04.936051 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-01 00:58:04.936055 | orchestrator | Sunday 01 March 2026 00:51:59 +0000 (0:00:01.174) 0:00:12.480 **********
2026-03-01 00:58:04.936061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-01 00:58:04.936068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-01 00:58:04.936072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-01 00:58:04.936081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-01 00:58:04.936086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-01 00:58:04.936091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-01 00:58:04.936101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-01 00:58:04.936105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-01 00:58:04.936109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-01 00:58:04.936113 | orchestrator |
2026-03-01 00:58:04.936126 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-01 00:58:04.936236 | orchestrator | Sunday 01 March 2026 00:52:01 +0000 (0:00:02.099) 0:00:14.579 **********
2026-03-01 00:58:04.936245 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.936250 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.936254 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.936259 | orchestrator |
2026-03-01 00:58:04.936263 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-01 00:58:04.936268 | orchestrator | Sunday 01 March 2026 00:52:03 +0000 (0:00:01.663) 0:00:16.242 **********
2026-03-01 00:58:04.936272 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-01 00:58:04.936277 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-01 00:58:04.936281 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-01 00:58:04.936286 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-01 00:58:04.936290 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-01 00:58:04.936295 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-01 00:58:04.936300 | orchestrator |
2026-03-01 00:58:04.936304 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-01 00:58:04.936309 | orchestrator | Sunday 01 March 2026 00:52:04 +0000 (0:00:01.697) 0:00:17.940 **********
2026-03-01 00:58:04.936313 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.936318 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.936322 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.936326 | orchestrator |
2026-03-01 00:58:04.936331 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-01 00:58:04.936335 | orchestrator | Sunday 01 March 2026 00:52:06 +0000 (0:00:01.170) 0:00:19.110 **********
2026-03-01 00:58:04.936340 | orchestrator | ok: [testbed-node-0]
2026-03-01 00:58:04.936344 | orchestrator | ok: [testbed-node-1]
2026-03-01 00:58:04.936349 | orchestrator | ok: [testbed-node-2]
2026-03-01 00:58:04.936354 | orchestrator |
2026-03-01 00:58:04.936358 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-01 00:58:04.936363 | orchestrator | Sunday 01 March 2026 00:52:08 +0000 (0:00:02.518) 0:00:21.629 **********
2026-03-01 00:58:04.936368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-01 00:58:04.936388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-01 00:58:04.936394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-01 00:58:04.936404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-01 00:58:04.936409 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.936414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-01 00:58:04.936419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-01 00:58:04.936423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-01 00:58:04.936428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-01 00:58:04.936433 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.936444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.936452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.936457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.936462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-01 00:58:04.936466 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.936471 | orchestrator | 2026-03-01 00:58:04.936475 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-01 00:58:04.936480 | orchestrator | Sunday 01 March 2026 00:52:09 +0000 (0:00:00.815) 0:00:22.445 ********** 2026-03-01 00:58:04.936484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.936531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-01 00:58:04.936535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.936545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-01 00:58:04.936559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.936576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d', '__omit_place_holder__9422a374641ab29bfec90abbcdcbae2b90f15d5d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-01 00:58:04.936583 | orchestrator | 2026-03-01 00:58:04.936592 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-01 00:58:04.936601 | orchestrator | Sunday 01 March 2026 00:52:13 +0000 (0:00:03.533) 0:00:25.979 ********** 2026-03-01 00:58:04.936607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.936661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.936667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.936673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.936680 | orchestrator | 2026-03-01 00:58:04.936686 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-01 00:58:04.936693 | orchestrator | Sunday 01 March 2026 00:52:16 +0000 (0:00:03.879) 0:00:29.858 ********** 2026-03-01 00:58:04.936699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-01 00:58:04.936706 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-01 00:58:04.936717 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-01 00:58:04.936723 | orchestrator | 2026-03-01 00:58:04.936752 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-01 00:58:04.936768 | orchestrator | Sunday 01 March 2026 00:52:20 +0000 (0:00:04.053) 0:00:33.912 ********** 2026-03-01 00:58:04.936772 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-01 00:58:04.936777 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-01 00:58:04.936780 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-01 00:58:04.936784 | orchestrator | 2026-03-01 00:58:04.937070 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-01 00:58:04.937082 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:05.201) 0:00:39.114 ********** 2026-03-01 00:58:04.937086 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937090 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937094 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937098 | orchestrator | 2026-03-01 00:58:04.937102 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-01 00:58:04.937106 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:00.560) 0:00:39.674 ********** 2026-03-01 00:58:04.937110 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-01 00:58:04.937114 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-01 00:58:04.937118 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-01 00:58:04.937122 | orchestrator | 2026-03-01 00:58:04.937126 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-01 00:58:04.937130 | orchestrator | Sunday 01 March 2026 00:52:30 +0000 (0:00:03.675) 0:00:43.350 ********** 2026-03-01 00:58:04.937136 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-01 00:58:04.937142 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-01 00:58:04.937152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-01 00:58:04.937158 | orchestrator | 2026-03-01 00:58:04.937164 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-01 00:58:04.937169 | orchestrator | Sunday 01 March 2026 00:52:33 +0000 (0:00:02.724) 0:00:46.074 ********** 2026-03-01 00:58:04.937175 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-01 00:58:04.937181 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-01 00:58:04.937186 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-01 00:58:04.937192 | orchestrator | 2026-03-01 00:58:04.937199 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-01 00:58:04.937205 | orchestrator | Sunday 01 March 2026 00:52:34 +0000 (0:00:01.864) 0:00:47.939 ********** 2026-03-01 00:58:04.937210 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-01 00:58:04.937216 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-01 00:58:04.937222 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-01 00:58:04.937262 
| orchestrator | 2026-03-01 00:58:04.937268 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-01 00:58:04.937272 | orchestrator | Sunday 01 March 2026 00:52:36 +0000 (0:00:01.762) 0:00:49.701 ********** 2026-03-01 00:58:04.937275 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.937285 | orchestrator | 2026-03-01 00:58:04.937289 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-01 00:58:04.937293 | orchestrator | Sunday 01 March 2026 00:52:38 +0000 (0:00:01.836) 0:00:51.537 ********** 2026-03-01 00:58:04.937297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.937357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.937361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.937371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.937376 | orchestrator | 2026-03-01 00:58:04.937380 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-01 00:58:04.937383 | orchestrator | Sunday 01 March 2026 00:52:41 +0000 (0:00:03.207) 0:00:54.744 ********** 2026-03-01 00:58:04.937395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 
00:58:04.937403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937407 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937422 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937426 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937483 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937486 | orchestrator | 2026-03-01 00:58:04.937490 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-01 00:58:04.937494 | orchestrator | Sunday 01 March 2026 00:52:42 +0000 (0:00:00.548) 0:00:55.293 ********** 2026-03-01 00:58:04.937498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937513 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937535 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937554 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937557 | orchestrator | 2026-03-01 00:58:04.937561 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-01 00:58:04.937565 | orchestrator | Sunday 01 March 2026 00:52:43 +0000 (0:00:00.690) 0:00:55.983 ********** 2026-03-01 00:58:04.937569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937588 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937607 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937629 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937632 | orchestrator | 2026-03-01 00:58:04.937636 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-01 00:58:04.937640 | orchestrator | Sunday 01 March 2026 00:52:43 +0000 (0:00:00.742) 0:00:56.725 ********** 2026-03-01 00:58:04.937644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937659 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937675 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937704 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937708 | orchestrator | 2026-03-01 00:58:04.937713 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-01 00:58:04.937717 | orchestrator | Sunday 01 March 2026 00:52:44 +0000 (0:00:00.601) 0:00:57.327 ********** 2026-03-01 00:58:04.937722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-03-01 00:58:04.937727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937736 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937760 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937778 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937806 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937813 | orchestrator | 2026-03-01 00:58:04.937819 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-01 00:58:04.937826 | orchestrator | Sunday 01 March 2026 00:52:45 +0000 (0:00:00.774) 0:00:58.102 ********** 2026-03-01 00:58:04.937833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937885 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.937889 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.937893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937913 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.937917 | orchestrator | 2026-03-01 00:58:04.937921 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-01 00:58:04.937925 | orchestrator | Sunday 01 March 2026 00:52:46 +0000 (0:00:01.137) 0:00:59.240 ********** 2026-03-01 00:58:04.937929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.937933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.937937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.937941 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.938009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.938195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.938213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.938217 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.938221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.938226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.938230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.938233 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.938237 | orchestrator | 2026-03-01 00:58:04.938241 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-01 00:58:04.938245 | orchestrator | Sunday 01 March 2026 00:52:46 +0000 (0:00:00.713) 0:00:59.954 ********** 2026-03-01 00:58:04.938249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.938256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.938260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.938264 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.938275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.938279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.938283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.938287 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.938291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-01 00:58:04.938295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-01 00:58:04.938301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-01 00:58:04.938306 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.938309 | orchestrator | 2026-03-01 00:58:04.938313 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-01 00:58:04.938317 | orchestrator | Sunday 01 March 2026 00:52:48 +0000 (0:00:01.008) 0:01:00.962 
********** 2026-03-01 00:58:04.938321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-01 00:58:04.938326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-01 00:58:04.938332 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-01 00:58:04.938336 | orchestrator | 2026-03-01 00:58:04.938342 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-01 00:58:04.938346 | orchestrator | Sunday 01 March 2026 00:52:49 +0000 (0:00:01.778) 0:01:02.741 ********** 2026-03-01 00:58:04.938350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-01 00:58:04.938354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-01 00:58:04.938358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-01 00:58:04.938361 | orchestrator | 2026-03-01 00:58:04.938365 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-01 00:58:04.938369 | orchestrator | Sunday 01 March 2026 00:52:51 +0000 (0:00:01.625) 0:01:04.366 ********** 2026-03-01 00:58:04.938373 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-01 00:58:04.938376 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-01 00:58:04.938380 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-01 00:58:04.938384 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 
'dest': 'id_rsa.pub'})  2026-03-01 00:58:04.938388 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.938392 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-01 00:58:04.938396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-01 00:58:04.938400 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.938404 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.938407 | orchestrator | 2026-03-01 00:58:04.938411 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-01 00:58:04.938415 | orchestrator | Sunday 01 March 2026 00:52:52 +0000 (0:00:00.978) 0:01:05.345 ********** 2026-03-01 00:58:04.938419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-01 00:58:04.938453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.938462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.938466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-01 00:58:04.938469 | orchestrator | 2026-03-01 00:58:04.938473 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-01 00:58:04.938477 | orchestrator | Sunday 01 March 2026 00:52:55 +0000 (0:00:02.682) 0:01:08.027 ********** 2026-03-01 00:58:04.938481 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.938485 | orchestrator | 2026-03-01 00:58:04.938489 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-01 00:58:04.938492 | orchestrator | Sunday 01 March 2026 00:52:55 +0000 (0:00:00.611) 0:01:08.639 ********** 2026-03-01 00:58:04.938497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 
2026-03-01 00:58:04.938506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.938511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-01 00:58:04.938519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.938523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.938527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.938530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.938540 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.938544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-01 00:58:04.938551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-01 00:58:04.938555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938563 | orchestrator |
2026-03-01 00:58:04.938566 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-01 00:58:04.938570 | orchestrator | Sunday 01 March 2026 00:52:59 +0000 (0:00:04.122) 0:01:12.761 **********
2026-03-01 00:58:04.938574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-01 00:58:04.938584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-01 00:58:04.938589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938600 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.938603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-01 00:58:04.938608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-01 00:58:04.938612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-01 00:58:04.938624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938631 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.938636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-01 00:58:04.938640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.938647 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.938651 | orchestrator |
2026-03-01 00:58:04.938655 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-03-01 00:58:04.938659 | orchestrator | Sunday 01 March 2026 00:53:00 +0000 (0:00:00.936) 0:01:13.698 **********
2026-03-01 00:58:04.938664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938672 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.938676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938684 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.938688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-03-01 00:58:04.938695 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.938699 | orchestrator |
2026-03-01 00:58:04.938707 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-01
00:58:04.938783 | orchestrator | Sunday 01 March 2026 00:53:01 +0000 (0:00:00.773) 0:01:14.472 ********** 2026-03-01 00:58:04.938790 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.938793 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.938797 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.938811 | orchestrator | 2026-03-01 00:58:04.938818 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-01 00:58:04.938836 | orchestrator | Sunday 01 March 2026 00:53:02 +0000 (0:00:01.453) 0:01:15.925 ********** 2026-03-01 00:58:04.938846 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.938878 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.938886 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.938893 | orchestrator | 2026-03-01 00:58:04.938900 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-01 00:58:04.938907 | orchestrator | Sunday 01 March 2026 00:53:05 +0000 (0:00:03.019) 0:01:18.944 ********** 2026-03-01 00:58:04.938914 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.938921 | orchestrator | 2026-03-01 00:58:04.938928 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-01 00:58:04.938936 | orchestrator | Sunday 01 March 2026 00:53:06 +0000 (0:00:00.826) 0:01:19.771 ********** 2026-03-01 00:58:04.939002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939080 | orchestrator |
2026-03-01 00:58:04.939085 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-01 00:58:04.939088 | orchestrator | Sunday 01 March 2026 00:53:09 +0000 (0:00:02.986) 0:01:22.758 **********
2026-03-01 00:58:04.939098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939110 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.939114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939130 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.939143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-01 00:58:04.939157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.939170 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.939176 | orchestrator |
2026-03-01 00:58:04.939183 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-01 00:58:04.939190 | orchestrator | Sunday 01 March 2026 00:53:10 +0000 (0:00:00.553) 0:01:23.312 **********
2026-03-01 00:58:04.939197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939212 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.939218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939230 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.939234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-01 00:58:04.939241 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.939245 | orchestrator |
2026-03-01 00:58:04.939249 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-01 00:58:04.939253 | orchestrator | Sunday
01 March 2026 00:53:11 +0000 (0:00:00.903) 0:01:24.215 ********** 2026-03-01 00:58:04.939257 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.939261 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.939264 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.939268 | orchestrator | 2026-03-01 00:58:04.939272 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-01 00:58:04.939275 | orchestrator | Sunday 01 March 2026 00:53:12 +0000 (0:00:01.189) 0:01:25.405 ********** 2026-03-01 00:58:04.939279 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.939283 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.939287 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.939290 | orchestrator | 2026-03-01 00:58:04.939300 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-01 00:58:04.939304 | orchestrator | Sunday 01 March 2026 00:53:14 +0000 (0:00:01.788) 0:01:27.194 ********** 2026-03-01 00:58:04.939308 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.939312 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.939315 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.939319 | orchestrator | 2026-03-01 00:58:04.939323 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-01 00:58:04.939327 | orchestrator | Sunday 01 March 2026 00:53:14 +0000 (0:00:00.264) 0:01:27.458 ********** 2026-03-01 00:58:04.939331 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.939335 | orchestrator | 2026-03-01 00:58:04.939338 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-01 00:58:04.939342 | orchestrator | Sunday 01 March 2026 00:53:15 +0000 (0:00:00.763) 0:01:28.222 ********** 2026-03-01 00:58:04.939346 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.939351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.939358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.939362 | orchestrator |
2026-03-01 00:58:04.939366 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-01 00:58:04.939370 | orchestrator | Sunday 01 March 2026 00:53:18 +0000 (0:00:02.866) 0:01:31.088 **********
2026-03-01 00:58:04.940107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.940202 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.940211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.940217 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.940223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-01 00:58:04.940236 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.940242 | orchestrator |
2026-03-01 00:58:04.940248 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-01 00:58:04.940254 | orchestrator | Sunday 01 March 2026 00:53:19 +0000 (0:00:01.400) 0:01:32.488 **********
2026-03-01 00:58:04.940260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940282 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.940288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940316 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.940334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-01 00:58:04.940346 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.940352 | orchestrator |
2026-03-01 00:58:04.940359 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-01 00:58:04.940366 | orchestrator | Sunday 01 March 2026 00:53:21 +0000 (0:00:01.667) 0:01:34.155 **********
2026-03-01 00:58:04.940372 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.940379 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.940386 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.940391 | orchestrator |
2026-03-01 00:58:04.940397 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-01 00:58:04.940410 | orchestrator | Sunday 01 March 2026 00:53:21 +0000 (0:00:00.622) 0:01:34.778 **********
2026-03-01 00:58:04.940416 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.940421 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.940427 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.940433 | orchestrator | 2026-03-01
00:58:04.940438 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-01 00:58:04.940445 | orchestrator | Sunday 01 March 2026 00:53:22 +0000 (0:00:01.106) 0:01:35.884 ********** 2026-03-01 00:58:04.940451 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.940457 | orchestrator | 2026-03-01 00:58:04.940463 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-01 00:58:04.940469 | orchestrator | Sunday 01 March 2026 00:53:23 +0000 (0:00:00.668) 0:01:36.553 ********** 2026-03-01 00:58:04.940477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.940484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.940526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.940532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940583 | orchestrator | 2026-03-01 00:58:04.940591 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-01 00:58:04.940597 | orchestrator | Sunday 01 March 2026 00:53:27 +0000 
(0:00:03.527) 0:01:40.081 ********** 2026-03-01 00:58:04.940603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.940610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940643 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.940649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.940655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.940662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940706 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.940712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.940719 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.940725 | orchestrator | 2026-03-01 00:58:04.940732 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-01 00:58:04.940739 | orchestrator | Sunday 01 March 2026 00:53:27 +0000 (0:00:00.823) 0:01:40.905 ********** 2026-03-01 00:58:04.940746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940790 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.940803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940809 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.940816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-01 00:58:04.940881 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.940888 | orchestrator | 2026-03-01 00:58:04.940895 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-01 00:58:04.940902 | orchestrator | Sunday 01 March 2026 00:53:29 +0000 (0:00:01.203) 0:01:42.109 ********** 2026-03-01 00:58:04.940909 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.940916 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.940924 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.940931 | orchestrator | 2026-03-01 00:58:04.940937 | orchestrator | TASK 
[proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-01 00:58:04.940961 | orchestrator | Sunday 01 March 2026 00:53:30 +0000 (0:00:01.486) 0:01:43.595 ********** 2026-03-01 00:58:04.940968 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.940973 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.940979 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.940984 | orchestrator | 2026-03-01 00:58:04.940990 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-01 00:58:04.940996 | orchestrator | Sunday 01 March 2026 00:53:32 +0000 (0:00:02.343) 0:01:45.938 ********** 2026-03-01 00:58:04.941001 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.941007 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.941013 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.941018 | orchestrator | 2026-03-01 00:58:04.941024 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-01 00:58:04.941029 | orchestrator | Sunday 01 March 2026 00:53:33 +0000 (0:00:00.519) 0:01:46.458 ********** 2026-03-01 00:58:04.941035 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.941040 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.941046 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.941052 | orchestrator | 2026-03-01 00:58:04.941057 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-01 00:58:04.941063 | orchestrator | Sunday 01 March 2026 00:53:33 +0000 (0:00:00.310) 0:01:46.769 ********** 2026-03-01 00:58:04.941069 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.941075 | orchestrator | 2026-03-01 00:58:04.941080 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-01 00:58:04.941086 | 
orchestrator | Sunday 01 March 2026 00:53:34 +0000 (0:00:00.748) 0:01:47.517 ********** 2026-03-01 00:58:04.941092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 00:58:04.941110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.941116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 00:58:04.941166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.941509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 00:58:04.941557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.941568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941665 | orchestrator | 2026-03-01 00:58:04.941671 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-01 00:58:04.941678 | orchestrator | Sunday 01 March 2026 00:53:39 +0000 (0:00:04.570) 0:01:52.087 ********** 2026-03-01 00:58:04.941683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 00:58:04.941697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.941704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941807 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.941823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 00:58:04.941830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.941836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.941989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942103 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.942132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 00:58:04.942140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 00:58:04.942146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.942203 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.942209 | orchestrator | 2026-03-01 00:58:04.942216 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-01 00:58:04.942223 | orchestrator | Sunday 01 March 2026 00:53:40 +0000 (0:00:01.110) 0:01:53.198 ********** 2026-03-01 00:58:04.942231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942245 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.942252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942273 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.942279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-01 00:58:04.942291 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.942298 | orchestrator | 2026-03-01 00:58:04.942302 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-01 00:58:04.942306 | orchestrator | Sunday 01 March 2026 00:53:41 +0000 (0:00:01.309) 0:01:54.507 ********** 2026-03-01 00:58:04.942310 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.942313 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.942317 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.942321 | orchestrator | 2026-03-01 00:58:04.942325 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-01 00:58:04.942328 | orchestrator | Sunday 01 March 2026 00:53:43 +0000 (0:00:01.652) 0:01:56.159 ********** 2026-03-01 00:58:04.942332 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.942336 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.942339 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.942343 | orchestrator | 2026-03-01 
00:58:04.942347 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-01 00:58:04.942350 | orchestrator | Sunday 01 March 2026 00:53:45 +0000 (0:00:01.900) 0:01:58.059 ********** 2026-03-01 00:58:04.942354 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.942358 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.942517 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.942523 | orchestrator | 2026-03-01 00:58:04.942528 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-01 00:58:04.942534 | orchestrator | Sunday 01 March 2026 00:53:45 +0000 (0:00:00.278) 0:01:58.338 ********** 2026-03-01 00:58:04.942540 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.942550 | orchestrator | 2026-03-01 00:58:04.942558 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-01 00:58:04.942564 | orchestrator | Sunday 01 March 2026 00:53:46 +0000 (0:00:00.738) 0:01:59.076 ********** 2026-03-01 00:58:04.942595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 00:58:04.942610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 00:58:04.942641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 00:58:04.942666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942675 | orchestrator | 2026-03-01 00:58:04.942679 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-01 00:58:04.942683 | orchestrator | Sunday 01 March 2026 00:53:51 +0000 (0:00:05.844) 0:02:04.920 ********** 2026-03-01 00:58:04.942687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 00:58:04.942716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942725 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.942745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 00:58:04.942761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942770 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.942774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 00:58:04.942788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.942798 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.942802 | orchestrator | 2026-03-01 00:58:04.942806 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-01 00:58:04.942811 | orchestrator | Sunday 01 March 2026 00:53:55 +0000 (0:00:03.514) 0:02:08.435 ********** 2026-03-01 00:58:04.942817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942831 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.942841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942856 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.942862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-01 00:58:04.942880 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.942886 | orchestrator | 2026-03-01 00:58:04.942892 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-01 00:58:04.942898 | orchestrator | Sunday 01 March 2026 00:53:59 +0000 (0:00:04.234) 0:02:12.669 ********** 2026-03-01 00:58:04.942904 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.942911 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.942917 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.942925 | orchestrator | 2026-03-01 00:58:04.942935 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] 
************* 2026-03-01 00:58:04.942940 | orchestrator | Sunday 01 March 2026 00:54:01 +0000 (0:00:01.333) 0:02:14.003 ********** 2026-03-01 00:58:04.942984 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.942990 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.942996 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.943002 | orchestrator | 2026-03-01 00:58:04.943008 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-01 00:58:04.943206 | orchestrator | Sunday 01 March 2026 00:54:03 +0000 (0:00:02.081) 0:02:16.084 ********** 2026-03-01 00:58:04.943222 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.943226 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.943230 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.943234 | orchestrator | 2026-03-01 00:58:04.943238 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-01 00:58:04.943241 | orchestrator | Sunday 01 March 2026 00:54:03 +0000 (0:00:00.488) 0:02:16.572 ********** 2026-03-01 00:58:04.943245 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.943249 | orchestrator | 2026-03-01 00:58:04.943253 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-01 00:58:04.943257 | orchestrator | Sunday 01 March 2026 00:54:04 +0000 (0:00:00.836) 0:02:17.409 ********** 2026-03-01 00:58:04.943261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 00:58:04.943267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 00:58:04.943271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 00:58:04.943286 | orchestrator | 2026-03-01 00:58:04.943295 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-01 00:58:04.943303 | orchestrator | Sunday 01 March 2026 00:54:07 +0000 (0:00:03.378) 0:02:20.787 ********** 2026-03-01 00:58:04.943309 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-01 00:58:04.943316 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.943344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-01 00:58:04.943351 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.943358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-01 00:58:04.943364 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.943369 | orchestrator | 2026-03-01 00:58:04.943375 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-01 00:58:04.943381 | orchestrator | Sunday 01 March 2026 00:54:08 +0000 (0:00:00.614) 0:02:21.402 ********** 2026-03-01 00:58:04.943388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943401 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.943408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943420 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.943427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-01 00:58:04.943440 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.943443 | orchestrator | 2026-03-01 00:58:04.943447 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-01 00:58:04.943451 | orchestrator | Sunday 01 March 2026 00:54:09 +0000 (0:00:00.657) 0:02:22.059 ********** 2026-03-01 00:58:04.943455 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.943484 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.943488 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.943492 | orchestrator | 2026-03-01 00:58:04.943496 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-01 00:58:04.943500 | orchestrator | Sunday 01 March 2026 00:54:10 +0000 (0:00:01.503) 0:02:23.562 ********** 2026-03-01 00:58:04.943503 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.943507 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.943511 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.943515 | orchestrator | 2026-03-01 00:58:04.943519 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-01 00:58:04.943525 | orchestrator | Sunday 01 March 2026 00:54:12 +0000 (0:00:02.174) 0:02:25.737 ********** 2026-03-01 00:58:04.943577 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.943586 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.943596 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.943603 | orchestrator | 2026-03-01 00:58:04.943609 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-01 00:58:04.943615 | orchestrator | Sunday 01 March 2026 00:54:13 +0000 (0:00:00.547) 0:02:26.285 ********** 2026-03-01 00:58:04.943622 | orchestrator 
| included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.943629 | orchestrator | 2026-03-01 00:58:04.943635 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-01 00:58:04.943642 | orchestrator | Sunday 01 March 2026 00:54:14 +0000 (0:00:00.921) 0:02:27.206 ********** 2026-03-01 00:58:04.943672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 00:58:04.943685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 00:58:04.943846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 00:58:04.943860 | orchestrator | 2026-03-01 00:58:04.943864 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-01 00:58:04.943868 | orchestrator | Sunday 01 March 2026 00:54:18 +0000 (0:00:03.813) 0:02:31.021 ********** 2026-03-01 00:58:04.943913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 00:58:04.943923 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.943927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 00:58:04.943935 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.943974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})
2026-03-01 00:58:04.943981 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.943987 | orchestrator |
2026-03-01 00:58:04.943993 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-03-01 00:58:04.944000 | orchestrator | Sunday 01 March 2026 00:54:19 +0000 (0:00:01.777) 0:02:32.798 **********
2026-03-01 00:58:04.944007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-01 00:58:04.944040 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.944044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-01 00:58:04.944064 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.944068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-01 00:58:04.944097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-01 00:58:04.944101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-01 00:58:04.944105 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944109 | orchestrator |
2026-03-01 00:58:04.944113 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-01 00:58:04.944117 | orchestrator | Sunday 01 March 2026 00:54:20 +0000 (0:00:01.083) 0:02:33.882 **********
2026-03-01 00:58:04.944120 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.944124 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.944128 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.944132 | orchestrator |
2026-03-01 00:58:04.944135 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-01 00:58:04.944139 | orchestrator | Sunday 01 March 2026 00:54:22 +0000 (0:00:01.551) 0:02:35.434 **********
2026-03-01 00:58:04.944143 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.944147 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.944150 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.944154 | orchestrator |
2026-03-01 00:58:04.944158 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-01 00:58:04.944162 | orchestrator | Sunday 01 March 2026 00:54:25 +0000 (0:00:02.602) 0:02:38.036 **********
2026-03-01 00:58:04.944165 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.944169 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.944173 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944179 | orchestrator |
2026-03-01 00:58:04.944186 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-01 00:58:04.944192 | orchestrator | Sunday 01 March 2026 00:54:25 +0000 (0:00:00.349) 0:02:38.385 **********
2026-03-01 00:58:04.944198 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.944205 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.944211 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944222 | orchestrator |
2026-03-01 00:58:04.944228 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-01 00:58:04.944235 | orchestrator | Sunday 01 March 2026 00:54:25 +0000 (0:00:00.564) 0:02:38.950 **********
2026-03-01 00:58:04.944242 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.944249 | orchestrator | 2026-03-01 00:58:04.944255 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-01 00:58:04.944262 | orchestrator | Sunday 01 March 2026 00:54:26 +0000 (0:00:00.939) 0:02:39.889 ********** 2026-03-01 00:58:04.944269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 00:58:04.944298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 
00:58:04.944304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 00:58:04.944308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 00:58:04.944312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 00:58:04.944316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 00:58:04.944320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 00:58:04.944349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 00:58:04.944359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 00:58:04.944365 | orchestrator | 2026-03-01 00:58:04.944371 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-01 00:58:04.944378 | orchestrator | Sunday 01 March 2026 00:54:30 +0000 (0:00:03.798) 0:02:43.688 ********** 2026-03-01 00:58:04.944385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 00:58:04.944391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 00:58:04.944397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 00:58:04.944430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 00:58:04.944437 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.944443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 00:58:04.944449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 00:58:04.944540 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.944548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 00:58:04.944555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-01 00:58:04.944567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-01 00:58:04.944573 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944578 | orchestrator |
2026-03-01 00:58:04.944585 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-01 00:58:04.944608 | orchestrator | Sunday 01 March 2026 00:54:31 +0000 (0:00:00.560) 0:02:44.249 **********
2026-03-01 00:58:04.944616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944630 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.944636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944649 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.944655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-01 00:58:04.944667 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944674 | orchestrator |
2026-03-01 00:58:04.944680 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-01 00:58:04.944687 | orchestrator | Sunday 01 March 2026 00:54:32 +0000 (0:00:00.756) 0:02:45.006 **********
2026-03-01 00:58:04.944693 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.944698 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.944705 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.944711 | orchestrator |
2026-03-01 00:58:04.944717 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-01 00:58:04.944729 | orchestrator | Sunday 01 March 2026 00:54:33 +0000 (0:00:01.467) 0:02:46.474 **********
2026-03-01 00:58:04.944736 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.944742 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.944749 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.944754 | orchestrator |
2026-03-01 00:58:04.944759 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-01 00:58:04.944763 | orchestrator | Sunday 01 March 2026 00:54:35 +0000 (0:00:02.126) 0:02:48.601 **********
2026-03-01 00:58:04.944768 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.944772 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.944777 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.944781 | orchestrator |
2026-03-01 00:58:04.944785 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-01 00:58:04.944790 | orchestrator | Sunday 01 March 2026 00:54:36 +0000 (0:00:01.064) 0:02:49.045 **********
2026-03-01 00:58:04.944795 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.944799 | orchestrator |
2026-03-01 00:58:04.944803 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-01 00:58:04.944808 | orchestrator | Sunday 01 March 2026 00:54:37 +0000 (0:00:01.064) 0:02:50.110 **********
2026-03-01 00:58:04.944813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 00:58:04.944836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 00:58:04.944847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 00:58:04.944863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944868 | orchestrator | 2026-03-01 00:58:04.944873 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-01 00:58:04.944878 | orchestrator | Sunday 01 March 2026 00:54:41 +0000 (0:00:03.939) 0:02:54.050 ********** 2026-03-01 00:58:04.944895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 00:58:04.944899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944907 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.944911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 00:58:04.944915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944919 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.944934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 00:58:04.944939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.944960 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.944966 | orchestrator | 2026-03-01 00:58:04.944970 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-01 00:58:04.944973 | orchestrator | Sunday 01 March 2026 00:54:42 +0000 (0:00:00.947) 0:02:54.997 ********** 2026-03-01 00:58:04.944978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.944986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.944990 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.944994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.944998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.945002 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.945005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.945009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-01 00:58:04.945013 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.945017 | orchestrator | 2026-03-01 00:58:04.945020 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-01 00:58:04.945024 | orchestrator | Sunday 01 March 2026 00:54:42 +0000 (0:00:00.894) 0:02:55.892 ********** 2026-03-01 00:58:04.945028 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.945032 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.945036 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.945039 | orchestrator | 2026-03-01 00:58:04.945043 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-01 00:58:04.945047 | orchestrator | Sunday 01 March 2026 00:54:44 +0000 (0:00:01.266) 0:02:57.158 ********** 2026-03-01 00:58:04.945050 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.945054 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.945058 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.945062 | orchestrator | 2026-03-01 00:58:04.945065 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-01 00:58:04.945069 | orchestrator | Sunday 01 March 2026 00:54:46 +0000 (0:00:02.276) 0:02:59.435 ********** 2026-03-01 00:58:04.945073 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.945077 | orchestrator | 2026-03-01 00:58:04.945080 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-01 00:58:04.945084 | orchestrator | Sunday 01 March 2026 00:54:47 +0000 (0:00:01.409) 0:03:00.844 ********** 2026-03-01 00:58:04.945088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-01 00:58:04.945137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
2026-03-01 00:58:04.945152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-01 00:58:04.945382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-01 00:58:04.945446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945458 | orchestrator | 2026-03-01 00:58:04.945464 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-01 00:58:04.945471 | orchestrator | Sunday 01 March 2026 00:54:51 +0000 (0:00:03.741) 0:03:04.585 ********** 2026-03-01 00:58:04.945495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-01 00:58:04.945512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.945531 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.945537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-01 00:58:04.945544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945579 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.945583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-01 00:58:04.945587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-01 00:58:04.945598 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.945602 | orchestrator |
2026-03-01 00:58:04.945606 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-01 00:58:04.945610 | orchestrator | Sunday 01 March 2026 00:54:52 +0000 (0:00:00.785) 0:03:05.371 **********
2026-03-01 00:58:04.945618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945626 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.945630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945652 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.945655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-01 00:58:04.945663 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.945667 | orchestrator |
2026-03-01 00:58:04.945670 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-01 00:58:04.945675 | orchestrator | Sunday 01 March 2026 00:54:53 +0000 (0:00:01.230) 0:03:06.601 **********
2026-03-01 00:58:04.945682 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.945688 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.945694 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.945700 | orchestrator |
2026-03-01 00:58:04.945706 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-01 00:58:04.945711 | orchestrator | Sunday 01 March 2026 00:54:55 +0000 (0:00:01.522) 0:03:08.123 **********
2026-03-01 00:58:04.945717 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.945723 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.945728 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.945734 | orchestrator |
2026-03-01 00:58:04.945739 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-01 00:58:04.945744 | orchestrator | Sunday 01 March 2026 00:54:57 +0000 (0:00:02.086) 0:03:10.210 **********
2026-03-01 00:58:04.945750 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.945755 | orchestrator |
2026-03-01 00:58:04.945760 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-01 00:58:04.945766 | orchestrator | Sunday 01 March 2026 00:54:58 +0000 (0:00:01.294) 0:03:11.505 **********
2026-03-01 00:58:04.945772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-01 00:58:04.945777 | orchestrator |
2026-03-01 00:58:04.945783 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-01 00:58:04.945788 | orchestrator | Sunday 01 March 2026 00:55:01 +0000 (0:00:03.225) 0:03:14.731 **********
2026-03-01 00:58:04.945796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.945834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.945842 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.945848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.945855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.945866 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.945892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.945901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.945908 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.945914 | orchestrator |
2026-03-01 00:58:04.945921 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-01 00:58:04.945928 | orchestrator | Sunday 01 March 2026 00:55:03 +0000 (0:00:01.948) 0:03:16.679 **********
2026-03-01 00:58:04.945935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.946340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.946393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.946402 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.946433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.946449 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.946456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-01 00:58:04.946532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-01 00:58:04.946565 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.946570 | orchestrator |
2026-03-01 00:58:04.946574 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-01 00:58:04.946578 | orchestrator | Sunday 01 March 2026 00:55:05 +0000 (0:00:02.092) 0:03:18.771 **********
2026-03-01 00:58:04.946582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946596 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.946600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946607 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.946611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-01 00:58:04.946636 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.946640 | orchestrator |
2026-03-01 00:58:04.946644 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-01 00:58:04.946648 | orchestrator | Sunday 01 March 2026 00:55:08 +0000 (0:00:02.400) 0:03:21.171 **********
2026-03-01 00:58:04.946652 | orchestrator | changed: [testbed-node-0]
2026-03-01 00:58:04.946656 | orchestrator | changed: [testbed-node-1]
2026-03-01 00:58:04.946659 | orchestrator | changed: [testbed-node-2]
2026-03-01 00:58:04.946708 | orchestrator |
2026-03-01 00:58:04.946714 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-01 00:58:04.946717 | orchestrator | Sunday 01 March 2026 00:55:09 +0000 (0:00:01.714) 0:03:22.886 **********
2026-03-01 00:58:04.946721 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.946725 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.946729 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.946732 | orchestrator |
2026-03-01 00:58:04.946736 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-01 00:58:04.946740 | orchestrator | Sunday 01 March 2026 00:55:11 +0000 (0:00:01.223) 0:03:24.109 **********
2026-03-01 00:58:04.946744 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.946748 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947014 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947023 | orchestrator |
2026-03-01 00:58:04.947031 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-01 00:58:04.947035 | orchestrator | Sunday 01 March 2026 00:55:11 +0000 (0:00:00.265) 0:03:24.375 **********
2026-03-01 00:58:04.947039 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 00:58:04.947042 | orchestrator |
2026-03-01 00:58:04.947046 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-01 00:58:04.947050 | orchestrator | Sunday 01 March 2026 00:55:12 +0000 (0:00:01.437) 0:03:25.813 **********
2026-03-01 00:58:04.947054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947068 | orchestrator |
2026-03-01 00:58:04.947071 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-01 00:58:04.947075 | orchestrator | Sunday 01 March 2026 00:55:14 +0000 (0:00:01.715) 0:03:27.528 **********
2026-03-01 00:58:04.947097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947102 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.947106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947113 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-01 00:58:04.947121 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947124 | orchestrator |
2026-03-01 00:58:04.947128 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-01 00:58:04.947132 | orchestrator | Sunday 01 March 2026 00:55:14 +0000 (0:00:00.357) 0:03:27.885 **********
2026-03-01 00:58:04.947139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-01 00:58:04.947148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-01 00:58:04.947154 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.947163 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-01 00:58:04.947179 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947185 | orchestrator |
2026-03-01 00:58:04.947191 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-01 00:58:04.947198 | orchestrator | Sunday 01 March 2026 00:55:15 +0000 (0:00:00.698) 0:03:28.584 **********
2026-03-01 00:58:04.947204 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.947211 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947264 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947271 | orchestrator |
2026-03-01 00:58:04.947276 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-01 00:58:04.947282 | orchestrator | Sunday 01 March 2026 00:55:16 +0000 (0:00:00.412) 0:03:28.996 **********
2026-03-01 00:58:04.947288 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.947536 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947540 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947544 | orchestrator |
2026-03-01 00:58:04.947548 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-01 00:58:04.947552 | orchestrator | Sunday 01 March 2026 00:55:17 +0000 (0:00:01.077) 0:03:30.074 **********
2026-03-01 00:58:04.947589 | orchestrator | skipping: [testbed-node-0]
2026-03-01 00:58:04.947594 | orchestrator | skipping: [testbed-node-1]
2026-03-01 00:58:04.947599 | orchestrator | skipping: [testbed-node-2]
2026-03-01 00:58:04.947603 | orchestrator |
2026-03-01 00:58:04.947606 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-01 00:58:04.947646 | orchestrator | Sunday 01 March 2026 00:55:17 +0000 (0:00:00.307) 0:03:30.382 **********
2026-03-01 00:58:04.947652
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.947656 | orchestrator | 2026-03-01 00:58:04.947660 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-01 00:58:04.947664 | orchestrator | Sunday 01 March 2026 00:55:18 +0000 (0:00:01.252) 0:03:31.635 ********** 2026-03-01 00:58:04.947668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 00:58:04.947674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.947729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 00:58:04.947737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.947787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.947875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.947879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947919 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.947923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.947932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.947976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.947984 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.947991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.947996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 00:58:04.948000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.948035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-03-01 00:58:04.948039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
2026-03-01 00:58:04.948110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-01 00:58:04.948115 | orchestrator |
2026-03-01 00:58:04.948119 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-01 00:58:04.948297 | orchestrator | Sunday 01 March 2026 00:55:22 +0000 (0:00:03.956) 0:03:35.591 **********
2026-03-01 00:58:04.948302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 00:58:04.948306 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.948363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-03-01 00:58:04.948368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.948429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 00:58:04.948482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2026-03-01 00:58:04.948605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-01 00:58:04.948612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.948620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948676 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.948680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.948696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948722 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.948726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-01 
00:58:04.948735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-01 00:58:04.948742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.948747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-01 00:58:04.948764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-01 00:58:04.948779 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.948786 | orchestrator | 2026-03-01 00:58:04.948793 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-01 00:58:04.948799 | orchestrator | Sunday 01 March 2026 00:55:24 +0000 (0:00:01.424) 0:03:37.016 ********** 2026-03-01 00:58:04.948809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}})  2026-03-01 00:58:04.948825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-01 00:58:04.948831 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.948838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-01 00:58:04.948853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-01 00:58:04.948860 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.948866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-01 00:58:04.948873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-01 00:58:04.948879 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.948885 | orchestrator | 2026-03-01 00:58:04.948891 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-01 00:58:04.948897 | orchestrator | Sunday 01 March 2026 00:55:26 +0000 (0:00:01.967) 0:03:38.983 ********** 2026-03-01 00:58:04.948903 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.948910 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.948917 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.948922 | orchestrator | 2026-03-01 
00:58:04.948929 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-01 00:58:04.948936 | orchestrator | Sunday 01 March 2026 00:55:27 +0000 (0:00:01.630) 0:03:40.614 ********** 2026-03-01 00:58:04.948942 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.948985 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.948991 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.948997 | orchestrator | 2026-03-01 00:58:04.949003 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-01 00:58:04.949009 | orchestrator | Sunday 01 March 2026 00:55:29 +0000 (0:00:02.326) 0:03:42.941 ********** 2026-03-01 00:58:04.949015 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.949021 | orchestrator | 2026-03-01 00:58:04.949028 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-01 00:58:04.949034 | orchestrator | Sunday 01 March 2026 00:55:31 +0000 (0:00:01.185) 0:03:44.127 ********** 2026-03-01 00:58:04.949041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949108 | orchestrator | 2026-03-01 00:58:04.949115 
| orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-01 00:58:04.949121 | orchestrator | Sunday 01 March 2026 00:55:34 +0000 (0:00:03.806) 0:03:47.934 ********** 2026-03-01 00:58:04.949128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949134 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.949141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949147 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949177 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949181 | orchestrator | 2026-03-01 00:58:04.949184 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-01 00:58:04.949188 | orchestrator | Sunday 01 March 2026 00:55:35 +0000 (0:00:00.537) 0:03:48.471 ********** 2026-03-01 00:58:04.949192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949201 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.949204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949212 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949224 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949227 | orchestrator | 2026-03-01 00:58:04.949231 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-01 00:58:04.949235 | orchestrator | Sunday 01 March 2026 00:55:36 +0000 (0:00:00.799) 0:03:49.270 ********** 2026-03-01 00:58:04.949239 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.949242 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.949246 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.949250 | orchestrator | 2026-03-01 00:58:04.949253 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-01 00:58:04.949257 | 
orchestrator | Sunday 01 March 2026 00:55:38 +0000 (0:00:01.874) 0:03:51.145 ********** 2026-03-01 00:58:04.949261 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.949265 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.949268 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.949272 | orchestrator | 2026-03-01 00:58:04.949276 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-01 00:58:04.949280 | orchestrator | Sunday 01 March 2026 00:55:39 +0000 (0:00:01.761) 0:03:52.907 ********** 2026-03-01 00:58:04.949283 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.949287 | orchestrator | 2026-03-01 00:58:04.949291 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-01 00:58:04.949298 | orchestrator | Sunday 01 March 2026 00:55:41 +0000 (0:00:01.398) 0:03:54.305 ********** 2026-03-01 00:58:04.949303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.949374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949387 | orchestrator | 2026-03-01 00:58:04.949396 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-01 00:58:04.949406 | orchestrator | Sunday 01 March 2026 00:55:45 +0000 (0:00:03.901) 0:03:58.207 ********** 2026-03-01 00:58:04.949414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949462 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.949469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949494 
| orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.949531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949536 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.949541 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949545 | orchestrator | 2026-03-01 00:58:04.949550 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-01 00:58:04.949554 | orchestrator | Sunday 01 March 2026 00:55:46 +0000 (0:00:01.017) 0:03:59.224 ********** 2026-03-01 00:58:04.949559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949582 | orchestrator | skipping: 
[testbed-node-0] 2026-03-01 00:58:04.949587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949605 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-01 00:58:04.949625 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949629 | orchestrator | 2026-03-01 00:58:04.949644 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-01 00:58:04.949649 | orchestrator | Sunday 01 March 2026 00:55:47 +0000 (0:00:00.804) 0:04:00.029 ********** 2026-03-01 00:58:04.949656 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.949663 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.949673 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.949681 | orchestrator | 2026-03-01 00:58:04.949687 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-01 00:58:04.949693 | orchestrator | Sunday 01 March 2026 00:55:48 +0000 (0:00:01.529) 0:04:01.559 ********** 2026-03-01 00:58:04.949699 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.949705 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.949711 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.949718 | orchestrator | 2026-03-01 00:58:04.949724 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-01 00:58:04.949730 | orchestrator | Sunday 01 March 2026 00:55:50 +0000 (0:00:02.006) 0:04:03.565 ********** 2026-03-01 00:58:04.949797 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.949814 | orchestrator | 2026-03-01 00:58:04.949817 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-01 00:58:04.949826 | orchestrator | Sunday 01 March 2026 00:55:51 +0000 (0:00:01.325) 0:04:04.891 ********** 2026-03-01 00:58:04.949830 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-01 00:58:04.949835 | orchestrator | 2026-03-01 00:58:04.949838 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-01 00:58:04.949842 | orchestrator | Sunday 01 March 2026 00:55:52 +0000 (0:00:00.768) 0:04:05.659 ********** 2026-03-01 00:58:04.949847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-01 00:58:04.949851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-01 00:58:04.949855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-01 00:58:04.949859 | 
orchestrator | 2026-03-01 00:58:04.949863 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-01 00:58:04.949868 | orchestrator | Sunday 01 March 2026 00:55:56 +0000 (0:00:03.880) 0:04:09.539 ********** 2026-03-01 00:58:04.949872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.949876 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.949879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.949883 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.949918 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949922 | orchestrator | 2026-03-01 00:58:04.949925 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-01 00:58:04.949929 | orchestrator | Sunday 01 March 2026 00:55:57 +0000 (0:00:00.930) 0:04:10.470 ********** 2026-03-01 00:58:04.949933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949942 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.949964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949972 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.949976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949980 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-01 00:58:04.949984 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.949987 | orchestrator | 2026-03-01 00:58:04.949991 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-01 00:58:04.949995 | orchestrator | Sunday 01 March 2026 00:55:58 +0000 (0:00:01.341) 0:04:11.811 ********** 2026-03-01 00:58:04.949999 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.950002 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.950006 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.950010 | orchestrator | 2026-03-01 00:58:04.950035 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-01 00:58:04.950039 | orchestrator | Sunday 01 March 2026 00:56:01 +0000 (0:00:02.382) 0:04:14.194 ********** 2026-03-01 00:58:04.950043 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.950046 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.950050 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.950054 | orchestrator | 2026-03-01 00:58:04.950058 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-01 00:58:04.950061 | orchestrator | Sunday 01 March 2026 00:56:04 +0000 (0:00:02.879) 0:04:17.074 ********** 2026-03-01 00:58:04.950066 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-01 00:58:04.950070 | orchestrator | 2026-03-01 00:58:04.950074 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-01 00:58:04.950077 | 
orchestrator | Sunday 01 March 2026 00:56:05 +0000 (0:00:01.147) 0:04:18.221 ********** 2026-03-01 00:58:04.950082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950089 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950115 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950123 | orchestrator | skipping: [testbed-node-2] 
2026-03-01 00:58:04.950127 | orchestrator | 2026-03-01 00:58:04.950131 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-01 00:58:04.950135 | orchestrator | Sunday 01 March 2026 00:56:06 +0000 (0:00:01.089) 0:04:19.311 ********** 2026-03-01 00:58:04.950139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950142 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950151 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-01 00:58:04.950164 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.950170 | orchestrator | 2026-03-01 00:58:04.950176 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-01 00:58:04.950182 | orchestrator | Sunday 01 March 2026 00:56:07 +0000 (0:00:01.106) 0:04:20.417 ********** 2026-03-01 00:58:04.950192 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950198 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950205 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.950211 | orchestrator | 2026-03-01 00:58:04.950217 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-01 00:58:04.950224 | orchestrator | Sunday 01 March 2026 00:56:08 +0000 (0:00:01.490) 0:04:21.907 ********** 2026-03-01 00:58:04.950230 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.950235 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.950238 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.950242 | orchestrator | 2026-03-01 00:58:04.950246 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-01 00:58:04.950250 | orchestrator | Sunday 01 March 2026 00:56:11 +0000 (0:00:02.309) 0:04:24.217 ********** 2026-03-01 00:58:04.950254 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.950257 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.950261 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.950265 | orchestrator | 2026-03-01 00:58:04.950268 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-01 00:58:04.950272 | orchestrator | Sunday 01 March 2026 00:56:13 +0000 (0:00:02.581) 0:04:26.799 ********** 2026-03-01 
00:58:04.950276 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-01 00:58:04.950280 | orchestrator | 2026-03-01 00:58:04.950284 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-01 00:58:04.950287 | orchestrator | Sunday 01 March 2026 00:56:14 +0000 (0:00:00.806) 0:04:27.606 ********** 2026-03-01 00:58:04.950308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950313 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950321 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950329 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.950333 | orchestrator | 2026-03-01 00:58:04.950336 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-01 00:58:04.950340 | orchestrator | Sunday 01 March 2026 00:56:15 +0000 (0:00:01.300) 0:04:28.906 ********** 2026-03-01 00:58:04.950344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950362 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950370 | orchestrator | skipping: [testbed-node-1] 2026-03-01 
00:58:04.950374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-01 00:58:04.950378 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.950382 | orchestrator | 2026-03-01 00:58:04.950385 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-01 00:58:04.950389 | orchestrator | Sunday 01 March 2026 00:56:17 +0000 (0:00:01.799) 0:04:30.705 ********** 2026-03-01 00:58:04.950393 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950397 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950400 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.950404 | orchestrator | 2026-03-01 00:58:04.950408 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-01 00:58:04.950412 | orchestrator | Sunday 01 March 2026 00:56:19 +0000 (0:00:01.379) 0:04:32.085 ********** 2026-03-01 00:58:04.950415 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.950431 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.950437 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.950441 | orchestrator | 2026-03-01 00:58:04.950445 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-01 00:58:04.950449 | orchestrator | Sunday 01 March 2026 00:56:21 +0000 (0:00:02.322) 0:04:34.408 ********** 2026-03-01 00:58:04.950452 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.950456 | orchestrator 
| ok: [testbed-node-1] 2026-03-01 00:58:04.950460 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.950463 | orchestrator | 2026-03-01 00:58:04.950467 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-01 00:58:04.950471 | orchestrator | Sunday 01 March 2026 00:56:24 +0000 (0:00:03.282) 0:04:37.690 ********** 2026-03-01 00:58:04.950475 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.950478 | orchestrator | 2026-03-01 00:58:04.950482 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-01 00:58:04.950486 | orchestrator | Sunday 01 March 2026 00:56:26 +0000 (0:00:01.545) 0:04:39.236 ********** 2026-03-01 00:58:04.950490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.950499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 00:58:04.950503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.950534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 00:58:04.950544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.950573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 00:58:04.950578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950595 | orchestrator | 2026-03-01 00:58:04.950599 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-01 00:58:04.950603 | orchestrator | Sunday 01 March 2026 00:56:29 +0000 (0:00:03.528) 0:04:42.764 ********** 2026-03-01 00:58:04.950607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.950611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 00:58:04.950628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950644 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.950652 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 00:58:04.950655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950685 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.950693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 
00:58:04.950696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 00:58:04.950715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 00:58:04.950722 | orchestrator | skipping: [testbed-node-2] 2026-03-01 
00:58:04.950726 | orchestrator | 2026-03-01 00:58:04.950734 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-01 00:58:04.950738 | orchestrator | Sunday 01 March 2026 00:56:30 +0000 (0:00:00.767) 0:04:43.532 ********** 2026-03-01 00:58:04.950742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950750 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.950754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950762 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.950765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-01 00:58:04.950773 | orchestrator | skipping: 
[testbed-node-2] 2026-03-01 00:58:04.950777 | orchestrator | 2026-03-01 00:58:04.950781 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-01 00:58:04.950785 | orchestrator | Sunday 01 March 2026 00:56:32 +0000 (0:00:01.624) 0:04:45.156 ********** 2026-03-01 00:58:04.950791 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.950797 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.950804 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.950811 | orchestrator | 2026-03-01 00:58:04.950817 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-01 00:58:04.950824 | orchestrator | Sunday 01 March 2026 00:56:33 +0000 (0:00:01.321) 0:04:46.478 ********** 2026-03-01 00:58:04.950831 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.950836 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.950843 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.950850 | orchestrator | 2026-03-01 00:58:04.950856 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-01 00:58:04.950863 | orchestrator | Sunday 01 March 2026 00:56:35 +0000 (0:00:02.170) 0:04:48.648 ********** 2026-03-01 00:58:04.950869 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.950876 | orchestrator | 2026-03-01 00:58:04.950883 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-01 00:58:04.950891 | orchestrator | Sunday 01 March 2026 00:56:37 +0000 (0:00:01.591) 0:04:50.239 ********** 2026-03-01 00:58:04.950899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 00:58:04.950935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 00:58:04.950983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 00:58:04.950992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 00:58:04.950997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 00:58:04.951018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 00:58:04.951027 | orchestrator | 2026-03-01 00:58:04.951031 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-01 00:58:04.951035 | orchestrator | Sunday 01 March 2026 00:56:42 +0000 (0:00:05.074) 0:04:55.314 ********** 2026-03-01 00:58:04.951039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 00:58:04.951044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 00:58:04.951048 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 00:58:04.951060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 00:58:04.951075 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951082 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 00:58:04.951086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 00:58:04.951090 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951094 | orchestrator | 
2026-03-01 00:58:04.951098 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-01 00:58:04.951101 | orchestrator | Sunday 01 March 2026 00:56:43 +0000 (0:00:00.664) 0:04:55.979 ********** 2026-03-01 00:58:04.951105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-01 00:58:04.951110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951122 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-01 00:58:04.951130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951137 | orchestrator | skipping: [testbed-node-1] 2026-03-01 
00:58:04.951141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-01 00:58:04.951145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-01 00:58:04.951184 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951191 | orchestrator | 2026-03-01 00:58:04.951196 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-01 00:58:04.951202 | orchestrator | Sunday 01 March 2026 00:56:43 +0000 (0:00:00.934) 0:04:56.913 ********** 2026-03-01 00:58:04.951207 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951213 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951219 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951224 | orchestrator | 2026-03-01 00:58:04.951230 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-01 00:58:04.951235 | orchestrator | Sunday 01 March 2026 00:56:44 +0000 (0:00:00.834) 0:04:57.748 ********** 2026-03-01 00:58:04.951241 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951247 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951253 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951258 | orchestrator | 2026-03-01 00:58:04.951264 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-03-01 00:58:04.951271 | orchestrator | Sunday 01 March 2026 00:56:46 +0000 (0:00:01.364) 0:04:59.112 ********** 2026-03-01 00:58:04.951276 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.951283 | orchestrator | 2026-03-01 00:58:04.951289 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-01 00:58:04.951295 | orchestrator | Sunday 01 March 2026 00:56:47 +0000 (0:00:01.519) 0:05:00.632 ********** 2026-03-01 00:58:04.951302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 00:58:04.951319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951325 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 00:58:04.951362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 00:58:04.951366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 00:58:04.951426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 00:58:04.951455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 00:58:04.951471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951502 | orchestrator | 2026-03-01 00:58:04.951506 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-01 00:58:04.951510 | orchestrator | Sunday 01 March 2026 00:56:52 +0000 (0:00:05.035) 0:05:05.667 ********** 2026-03-01 00:58:04.951518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-01 00:58:04.951522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-01 00:58:04.951547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-01 00:58:04.951559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951625 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-01 00:58:04.951644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951668 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-01 00:58:04.951689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 00:58:04.951693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-01 00:58:04.951714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-01 00:58:04.951724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 00:58:04.951732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 00:58:04.951736 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951740 | orchestrator | 2026-03-01 00:58:04.951744 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-01 00:58:04.951748 | orchestrator | Sunday 01 March 2026 00:56:53 +0000 (0:00:00.938) 0:05:06.606 ********** 2026-03-01 00:58:04.951752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-01 00:58:04.951766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-01 00:58:04.951770 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951774 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-01 00:58:04.951786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-01 00:58:04.951793 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-01 00:58:04.951810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2026-03-01 00:58:04.951814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-01 00:58:04.951818 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951821 | orchestrator | 2026-03-01 00:58:04.951825 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-01 00:58:04.951829 | orchestrator | Sunday 01 March 2026 00:56:54 +0000 (0:00:01.110) 0:05:07.716 ********** 2026-03-01 00:58:04.951833 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951837 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951840 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951844 | orchestrator | 2026-03-01 00:58:04.951848 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-01 00:58:04.951852 | orchestrator | Sunday 01 March 2026 00:56:55 +0000 (0:00:00.449) 0:05:08.166 ********** 2026-03-01 00:58:04.951855 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951859 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951863 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951867 | orchestrator | 2026-03-01 00:58:04.951870 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-01 00:58:04.951874 | orchestrator | Sunday 01 March 2026 00:56:56 +0000 (0:00:01.487) 0:05:09.654 ********** 2026-03-01 00:58:04.951878 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.951882 | orchestrator | 2026-03-01 00:58:04.951885 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] 
******************* 2026-03-01 00:58:04.951889 | orchestrator | Sunday 01 March 2026 00:56:58 +0000 (0:00:01.766) 0:05:11.420 ********** 2026-03-01 00:58:04.951893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-01 00:58:04.951902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-01 00:58:04.951911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-01 00:58:04.951916 | orchestrator | 2026-03-01 00:58:04.951920 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-01 00:58:04.951924 | orchestrator | Sunday 01 March 2026 00:57:01 +0000 (0:00:02.845) 0:05:14.266 ********** 2026-03-01 00:58:04.951927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-01 00:58:04.951932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-01 00:58:04.951939 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.951943 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.951976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-01 00:58:04.951980 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.951984 | orchestrator | 2026-03-01 00:58:04.951988 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-01 00:58:04.951994 | orchestrator | Sunday 01 March 2026 00:57:02 +0000 (0:00:00.768) 0:05:15.035 ********** 2026-03-01 00:58:04.952002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-01 00:58:04.952006 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-01 00:58:04.952014 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-01 00:58:04.952021 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952025 | orchestrator | 2026-03-01 00:58:04.952029 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-01 
00:58:04.952032 | orchestrator | Sunday 01 March 2026 00:57:02 +0000 (0:00:00.662) 0:05:15.697 ********** 2026-03-01 00:58:04.952036 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952040 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952044 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952047 | orchestrator | 2026-03-01 00:58:04.952051 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-01 00:58:04.952055 | orchestrator | Sunday 01 March 2026 00:57:03 +0000 (0:00:00.454) 0:05:16.152 ********** 2026-03-01 00:58:04.952059 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952063 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952066 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952070 | orchestrator | 2026-03-01 00:58:04.952074 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-01 00:58:04.952078 | orchestrator | Sunday 01 March 2026 00:57:04 +0000 (0:00:01.522) 0:05:17.675 ********** 2026-03-01 00:58:04.952082 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 00:58:04.952085 | orchestrator | 2026-03-01 00:58:04.952089 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-01 00:58:04.952093 | orchestrator | Sunday 01 March 2026 00:57:06 +0000 (0:00:01.722) 0:05:19.397 ********** 2026-03-01 00:58:04.952097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-01 00:58:04.952135 | orchestrator | 2026-03-01 00:58:04.952139 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-01 00:58:04.952143 | orchestrator | Sunday 01 March 2026 00:57:12 +0000 (0:00:05.597) 0:05:24.994 ********** 2026-03-01 00:58:04.952151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952159 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952174 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-01 00:58:04.952192 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952196 | orchestrator | 2026-03-01 00:58:04.952200 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-01 00:58:04.952204 | orchestrator | Sunday 01 March 2026 00:57:12 +0000 (0:00:00.561) 0:05:25.556 ********** 2026-03-01 00:58:04.952208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952227 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2026-03-01 00:58:04.952247 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-01 00:58:04.952266 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952269 | orchestrator | 2026-03-01 00:58:04.952273 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-01 00:58:04.952277 | orchestrator | Sunday 01 March 2026 00:57:13 +0000 (0:00:01.310) 0:05:26.867 ********** 2026-03-01 00:58:04.952281 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.952285 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.952289 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.952293 | orchestrator | 2026-03-01 00:58:04.952299 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-01 00:58:04.952309 | orchestrator | Sunday 01 March 2026 00:57:15 +0000 (0:00:01.184) 0:05:28.051 ********** 2026-03-01 00:58:04.952319 | 
orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.952325 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.952332 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.952338 | orchestrator | 2026-03-01 00:58:04.952343 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-01 00:58:04.952349 | orchestrator | Sunday 01 March 2026 00:57:17 +0000 (0:00:01.921) 0:05:29.972 ********** 2026-03-01 00:58:04.952361 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952367 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952373 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952378 | orchestrator | 2026-03-01 00:58:04.952385 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-01 00:58:04.952391 | orchestrator | Sunday 01 March 2026 00:57:17 +0000 (0:00:00.301) 0:05:30.274 ********** 2026-03-01 00:58:04.952397 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952403 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952410 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952416 | orchestrator | 2026-03-01 00:58:04.952424 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-01 00:58:04.952430 | orchestrator | Sunday 01 March 2026 00:57:17 +0000 (0:00:00.311) 0:05:30.586 ********** 2026-03-01 00:58:04.952438 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952445 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952451 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952457 | orchestrator | 2026-03-01 00:58:04.952463 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-01 00:58:04.952468 | orchestrator | Sunday 01 March 2026 00:57:18 +0000 (0:00:00.613) 0:05:31.200 ********** 2026-03-01 00:58:04.952474 | 
orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952480 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952487 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952493 | orchestrator | 2026-03-01 00:58:04.952499 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-01 00:58:04.952505 | orchestrator | Sunday 01 March 2026 00:57:18 +0000 (0:00:00.316) 0:05:31.517 ********** 2026-03-01 00:58:04.952511 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952517 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952523 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952529 | orchestrator | 2026-03-01 00:58:04.952536 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-01 00:58:04.952542 | orchestrator | Sunday 01 March 2026 00:57:18 +0000 (0:00:00.280) 0:05:31.797 ********** 2026-03-01 00:58:04.952549 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.952555 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.952561 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.952567 | orchestrator | 2026-03-01 00:58:04.952574 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-01 00:58:04.952581 | orchestrator | Sunday 01 March 2026 00:57:19 +0000 (0:00:00.685) 0:05:32.483 ********** 2026-03-01 00:58:04.952587 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952594 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952601 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952607 | orchestrator | 2026-03-01 00:58:04.952614 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-01 00:58:04.952620 | orchestrator | Sunday 01 March 2026 00:57:20 +0000 (0:00:00.635) 0:05:33.119 ********** 2026-03-01 00:58:04.952627 | orchestrator | ok: 
[testbed-node-0] 2026-03-01 00:58:04.952634 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952640 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952647 | orchestrator | 2026-03-01 00:58:04.952654 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-01 00:58:04.952661 | orchestrator | Sunday 01 March 2026 00:57:20 +0000 (0:00:00.367) 0:05:33.486 ********** 2026-03-01 00:58:04.952672 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952679 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952685 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952691 | orchestrator | 2026-03-01 00:58:04.952698 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-01 00:58:04.952704 | orchestrator | Sunday 01 March 2026 00:57:21 +0000 (0:00:00.916) 0:05:34.403 ********** 2026-03-01 00:58:04.952711 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952729 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952736 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952742 | orchestrator | 2026-03-01 00:58:04.952749 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-01 00:58:04.952755 | orchestrator | Sunday 01 March 2026 00:57:22 +0000 (0:00:01.427) 0:05:35.830 ********** 2026-03-01 00:58:04.952762 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952769 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952775 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952785 | orchestrator | 2026-03-01 00:58:04.952792 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-01 00:58:04.952799 | orchestrator | Sunday 01 March 2026 00:57:23 +0000 (0:00:01.102) 0:05:36.932 ********** 2026-03-01 00:58:04.952805 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.952812 | orchestrator | changed: 
[testbed-node-2] 2026-03-01 00:58:04.952818 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.952824 | orchestrator | 2026-03-01 00:58:04.952830 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-01 00:58:04.952836 | orchestrator | Sunday 01 March 2026 00:57:28 +0000 (0:00:04.442) 0:05:41.375 ********** 2026-03-01 00:58:04.952846 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952855 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952862 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952869 | orchestrator | 2026-03-01 00:58:04.952876 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-01 00:58:04.952884 | orchestrator | Sunday 01 March 2026 00:57:32 +0000 (0:00:03.623) 0:05:44.999 ********** 2026-03-01 00:58:04.952894 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.952901 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.952908 | orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.952914 | orchestrator | 2026-03-01 00:58:04.952922 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-01 00:58:04.952929 | orchestrator | Sunday 01 March 2026 00:57:42 +0000 (0:00:10.719) 0:05:55.718 ********** 2026-03-01 00:58:04.952936 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.952973 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.952981 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.952986 | orchestrator | 2026-03-01 00:58:04.952992 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-01 00:58:04.952998 | orchestrator | Sunday 01 March 2026 00:57:47 +0000 (0:00:04.679) 0:06:00.397 ********** 2026-03-01 00:58:04.953004 | orchestrator | changed: [testbed-node-2] 2026-03-01 00:58:04.953010 | orchestrator | changed: [testbed-node-0] 2026-03-01 00:58:04.953017 | 
orchestrator | changed: [testbed-node-1] 2026-03-01 00:58:04.953024 | orchestrator | 2026-03-01 00:58:04.953030 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-01 00:58:04.953036 | orchestrator | Sunday 01 March 2026 00:57:56 +0000 (0:00:08.916) 0:06:09.313 ********** 2026-03-01 00:58:04.953042 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953049 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953056 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953062 | orchestrator | 2026-03-01 00:58:04.953068 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-01 00:58:04.953075 | orchestrator | Sunday 01 March 2026 00:57:56 +0000 (0:00:00.350) 0:06:09.663 ********** 2026-03-01 00:58:04.953081 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953139 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953153 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953158 | orchestrator | 2026-03-01 00:58:04.953164 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-01 00:58:04.953170 | orchestrator | Sunday 01 March 2026 00:57:57 +0000 (0:00:00.671) 0:06:10.335 ********** 2026-03-01 00:58:04.953176 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953181 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953191 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953204 | orchestrator | 2026-03-01 00:58:04.953210 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-01 00:58:04.953217 | orchestrator | Sunday 01 March 2026 00:57:57 +0000 (0:00:00.362) 0:06:10.697 ********** 2026-03-01 00:58:04.953223 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953230 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953236 | 
orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953242 | orchestrator | 2026-03-01 00:58:04.953248 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-01 00:58:04.953254 | orchestrator | Sunday 01 March 2026 00:57:58 +0000 (0:00:00.369) 0:06:11.067 ********** 2026-03-01 00:58:04.953260 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953268 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953272 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953276 | orchestrator | 2026-03-01 00:58:04.953280 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-01 00:58:04.953284 | orchestrator | Sunday 01 March 2026 00:57:58 +0000 (0:00:00.374) 0:06:11.442 ********** 2026-03-01 00:58:04.953288 | orchestrator | skipping: [testbed-node-0] 2026-03-01 00:58:04.953291 | orchestrator | skipping: [testbed-node-1] 2026-03-01 00:58:04.953295 | orchestrator | skipping: [testbed-node-2] 2026-03-01 00:58:04.953299 | orchestrator | 2026-03-01 00:58:04.953303 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-01 00:58:04.953307 | orchestrator | Sunday 01 March 2026 00:57:58 +0000 (0:00:00.345) 0:06:11.787 ********** 2026-03-01 00:58:04.953311 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.953315 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.953319 | orchestrator | ok: [testbed-node-2] 2026-03-01 00:58:04.953322 | orchestrator | 2026-03-01 00:58:04.953326 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-01 00:58:04.953330 | orchestrator | Sunday 01 March 2026 00:58:00 +0000 (0:00:01.388) 0:06:13.176 ********** 2026-03-01 00:58:04.953334 | orchestrator | ok: [testbed-node-0] 2026-03-01 00:58:04.953337 | orchestrator | ok: [testbed-node-1] 2026-03-01 00:58:04.953341 | orchestrator | ok: 
[testbed-node-2] 2026-03-01 00:58:04.953345 | orchestrator | 2026-03-01 00:58:04.953348 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 00:58:04.953353 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-01 00:58:04.953358 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-01 00:58:04.953361 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-01 00:58:04.953365 | orchestrator | 2026-03-01 00:58:04.953369 | orchestrator | 2026-03-01 00:58:04.953373 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 00:58:04.953376 | orchestrator | Sunday 01 March 2026 00:58:01 +0000 (0:00:00.875) 0:06:14.051 ********** 2026-03-01 00:58:04.953380 | orchestrator | =============================================================================== 2026-03-01 00:58:04.953384 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.72s 2026-03-01 00:58:04.953387 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.92s 2026-03-01 00:58:04.953391 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.84s 2026-03-01 00:58:04.953395 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.60s 2026-03-01 00:58:04.953399 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.20s 2026-03-01 00:58:04.953402 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.08s 2026-03-01 00:58:04.953406 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.04s 2026-03-01 00:58:04.953414 | orchestrator | loadbalancer : Wait for backup proxysql 
to start ------------------------ 4.68s 2026-03-01 00:58:04.953418 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.57s 2026-03-01 00:58:04.953427 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.44s 2026-03-01 00:58:04.953435 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.23s 2026-03-01 00:58:04.953439 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.12s 2026-03-01 00:58:04.953442 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.05s 2026-03-01 00:58:04.953446 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.96s 2026-03-01 00:58:04.953450 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.94s 2026-03-01 00:58:04.953454 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.90s 2026-03-01 00:58:04.953458 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.88s 2026-03-01 00:58:04.953461 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.88s 2026-03-01 00:58:04.953465 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.81s 2026-03-01 00:58:04.953469 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.81s 2026-03-01 00:58:04.953473 | orchestrator | 2026-03-01 00:58:04 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 00:58:04.953477 | orchestrator | 2026-03-01 00:58:04 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:58:04.953480 | orchestrator | 2026-03-01 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:58:07.982663 | orchestrator | 2026-03-01 00:58:07 | INFO  | Task 
e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 00:58:07.984070 | orchestrator | 2026-03-01 00:58:07 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 00:58:07.985428 | orchestrator | 2026-03-01 00:58:07 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:58:07.985488 | orchestrator | 2026-03-01 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:59:51.583271 | orchestrator | 2026-03-01 00:59:51 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 00:59:51.585474 | orchestrator | 2026-03-01 00:59:51 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 00:59:51.587242 | orchestrator | 2026-03-01 00:59:51 |
INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:59:51.587275 | orchestrator | 2026-03-01 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:59:54.637335 | orchestrator | 2026-03-01 00:59:54 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 00:59:54.637407 | orchestrator | 2026-03-01 00:59:54 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 00:59:54.640593 | orchestrator | 2026-03-01 00:59:54 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:59:54.640675 | orchestrator | 2026-03-01 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-03-01 00:59:57.696410 | orchestrator | 2026-03-01 00:59:57 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 00:59:57.698458 | orchestrator | 2026-03-01 00:59:57 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 00:59:57.700638 | orchestrator | 2026-03-01 00:59:57 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 00:59:57.700692 | orchestrator | 2026-03-01 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:00.749364 | orchestrator | 2026-03-01 01:00:00 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:00.752180 | orchestrator | 2026-03-01 01:00:00 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:00.752778 | orchestrator | 2026-03-01 01:00:00 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state STARTED 2026-03-01 01:00:00.752858 | orchestrator | 2026-03-01 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:03.799376 | orchestrator | 2026-03-01 01:00:03 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:03.802231 | orchestrator | 2026-03-01 01:00:03 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in 
state STARTED 2026-03-01 01:00:03.808182 | orchestrator | 2026-03-01 01:00:03 | INFO  | Task 3fb90159-2220-4b94-8062-198c57f91531 is in state SUCCESS 2026-03-01 01:00:03.808548 | orchestrator | 2026-03-01 01:00:03.810169 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-01 01:00:03.810222 | orchestrator | 2.16.14 2026-03-01 01:00:03.810231 | orchestrator | 2026-03-01 01:00:03.810239 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-01 01:00:03.810244 | orchestrator | 2026-03-01 01:00:03.810248 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-01 01:00:03.810252 | orchestrator | Sunday 01 March 2026 00:49:25 +0000 (0:00:00.664) 0:00:00.664 ********** 2026-03-01 01:00:03.810269 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.810274 | orchestrator | 2026-03-01 01:00:03.810278 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-01 01:00:03.810282 | orchestrator | Sunday 01 March 2026 00:49:26 +0000 (0:00:00.868) 0:00:01.533 ********** 2026-03-01 01:00:03.810286 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810290 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810294 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810297 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810301 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810305 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810308 | orchestrator | 2026-03-01 01:00:03.810312 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-01 01:00:03.810316 | orchestrator | Sunday 01 March 2026 00:49:28 +0000 (0:00:01.447) 0:00:02.980 ********** 2026-03-01 
01:00:03.810320 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810323 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810327 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810331 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810337 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810343 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810349 | orchestrator | 2026-03-01 01:00:03.810355 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-01 01:00:03.810361 | orchestrator | Sunday 01 March 2026 00:49:29 +0000 (0:00:00.844) 0:00:03.824 ********** 2026-03-01 01:00:03.810367 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810373 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810379 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810386 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810390 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810393 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810397 | orchestrator | 2026-03-01 01:00:03.810401 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-01 01:00:03.810412 | orchestrator | Sunday 01 March 2026 00:49:30 +0000 (0:00:01.004) 0:00:04.829 ********** 2026-03-01 01:00:03.810416 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810420 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810424 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810427 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810431 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810435 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810438 | orchestrator | 2026-03-01 01:00:03.810442 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-01 01:00:03.810446 | orchestrator | Sunday 01 March 2026 00:49:30 +0000 (0:00:00.588) 
0:00:05.418 ********** 2026-03-01 01:00:03.810450 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810453 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810457 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810461 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810476 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810482 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810488 | orchestrator | 2026-03-01 01:00:03.810494 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-01 01:00:03.810501 | orchestrator | Sunday 01 March 2026 00:49:31 +0000 (0:00:00.579) 0:00:05.997 ********** 2026-03-01 01:00:03.810508 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810514 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810523 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810533 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810539 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810545 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810551 | orchestrator | 2026-03-01 01:00:03.810557 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-01 01:00:03.810564 | orchestrator | Sunday 01 March 2026 00:49:31 +0000 (0:00:00.619) 0:00:06.619 ********** 2026-03-01 01:00:03.810578 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.810587 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.810593 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.810599 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.810605 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.810615 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.810622 | orchestrator | 2026-03-01 01:00:03.810628 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-01 
01:00:03.810633 | orchestrator | Sunday 01 March 2026 00:49:32 +0000 (0:00:00.596) 0:00:07.216 ********** 2026-03-01 01:00:03.810639 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810645 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810651 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810657 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810663 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810669 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810676 | orchestrator | 2026-03-01 01:00:03.810681 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-01 01:00:03.810684 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:00.773) 0:00:07.989 ********** 2026-03-01 01:00:03.810688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:00:03.810692 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:00:03.810696 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:00:03.810700 | orchestrator | 2026-03-01 01:00:03.810704 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-01 01:00:03.810708 | orchestrator | Sunday 01 March 2026 00:49:33 +0000 (0:00:00.514) 0:00:08.504 ********** 2026-03-01 01:00:03.810712 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.810715 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.810719 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.810735 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.810742 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.810748 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.810755 | orchestrator | 2026-03-01 01:00:03.810761 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 
2026-03-01 01:00:03.810768 | orchestrator | Sunday 01 March 2026 00:49:35 +0000 (0:00:01.339) 0:00:09.844 ********** 2026-03-01 01:00:03.810774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:00:03.810779 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:00:03.810784 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:00:03.810788 | orchestrator | 2026-03-01 01:00:03.810793 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-01 01:00:03.810814 | orchestrator | Sunday 01 March 2026 00:49:37 +0000 (0:00:02.760) 0:00:12.605 ********** 2026-03-01 01:00:03.810821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-01 01:00:03.810827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-01 01:00:03.810833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-01 01:00:03.810839 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.810845 | orchestrator | 2026-03-01 01:00:03.810851 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-01 01:00:03.810857 | orchestrator | Sunday 01 March 2026 00:49:38 +0000 (0:00:00.911) 0:00:13.516 ********** 2026-03-01 01:00:03.810864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.810873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-01 
01:00:03.810889 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.810897 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.810903 | orchestrator | 2026-03-01 01:00:03.810909 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-01 01:00:03.810916 | orchestrator | Sunday 01 March 2026 00:49:39 +0000 (0:00:01.073) 0:00:14.589 ********** 2026-03-01 01:00:03.810925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.810934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.810941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 
01:00:03.810948 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.810954 | orchestrator | 2026-03-01 01:00:03.810973 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-01 01:00:03.810980 | orchestrator | Sunday 01 March 2026 00:49:40 +0000 (0:00:00.433) 0:00:15.022 ********** 2026-03-01 01:00:03.810993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-01 00:49:35.713524', 'end': '2026-03-01 00:49:35.792233', 'delta': '0:00:00.078709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.811000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-01 00:49:36.582719', 'end': '2026-03-01 00:49:36.668727', 'delta': '0:00:00.086008', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.811005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': 
'', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-01 00:49:37.595141', 'end': '2026-03-01 00:49:37.714903', 'delta': '0:00:00.119762', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.811015 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811019 | orchestrator | 2026-03-01 01:00:03.811027 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-01 01:00:03.811032 | orchestrator | Sunday 01 March 2026 00:49:40 +0000 (0:00:00.554) 0:00:15.577 ********** 2026-03-01 01:00:03.811036 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.811040 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.811044 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.811048 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.811052 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.811058 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.811064 | orchestrator | 2026-03-01 01:00:03.811070 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-01 01:00:03.811076 | orchestrator | Sunday 01 March 2026 00:49:42 +0000 (0:00:01.617) 0:00:17.194 ********** 2026-03-01 01:00:03.811082 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:00:03.811089 | orchestrator | 2026-03-01 01:00:03.811095 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-01 01:00:03.811101 | 
orchestrator | Sunday 01 March 2026 00:49:43 +0000 (0:00:00.692) 0:00:17.886 ********** 2026-03-01 01:00:03.811107 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811113 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811119 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811126 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811131 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811137 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811143 | orchestrator | 2026-03-01 01:00:03.811149 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-01 01:00:03.811156 | orchestrator | Sunday 01 March 2026 00:49:45 +0000 (0:00:02.123) 0:00:20.009 ********** 2026-03-01 01:00:03.811162 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811168 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811175 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811181 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811188 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811195 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811201 | orchestrator | 2026-03-01 01:00:03.811207 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-01 01:00:03.811214 | orchestrator | Sunday 01 March 2026 00:49:47 +0000 (0:00:02.363) 0:00:22.373 ********** 2026-03-01 01:00:03.811220 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811227 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811232 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811235 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811239 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811243 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811247 | orchestrator | 2026-03-01 
01:00:03.811251 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-01 01:00:03.811258 | orchestrator | Sunday 01 March 2026 00:49:49 +0000 (0:00:01.464) 0:00:23.838 ********** 2026-03-01 01:00:03.811265 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811272 | orchestrator | 2026-03-01 01:00:03.811278 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-01 01:00:03.811290 | orchestrator | Sunday 01 March 2026 00:49:49 +0000 (0:00:00.198) 0:00:24.036 ********** 2026-03-01 01:00:03.811297 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811304 | orchestrator | 2026-03-01 01:00:03.811310 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-01 01:00:03.811316 | orchestrator | Sunday 01 March 2026 00:49:49 +0000 (0:00:00.582) 0:00:24.619 ********** 2026-03-01 01:00:03.811323 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811329 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811336 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811347 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811354 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811360 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811366 | orchestrator | 2026-03-01 01:00:03.811373 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-01 01:00:03.811380 | orchestrator | Sunday 01 March 2026 00:49:50 +0000 (0:00:00.875) 0:00:25.495 ********** 2026-03-01 01:00:03.811386 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811392 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811399 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811405 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811411 | orchestrator | skipping: 
[testbed-node-1] 2026-03-01 01:00:03.811415 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811419 | orchestrator | 2026-03-01 01:00:03.811423 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-01 01:00:03.811426 | orchestrator | Sunday 01 March 2026 00:49:52 +0000 (0:00:01.744) 0:00:27.239 ********** 2026-03-01 01:00:03.811430 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811434 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811438 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811442 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811445 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811449 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811453 | orchestrator | 2026-03-01 01:00:03.811457 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-01 01:00:03.811460 | orchestrator | Sunday 01 March 2026 00:49:53 +0000 (0:00:00.640) 0:00:27.880 ********** 2026-03-01 01:00:03.811464 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811468 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811471 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811475 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811479 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811483 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811486 | orchestrator | 2026-03-01 01:00:03.811492 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-01 01:00:03.811508 | orchestrator | Sunday 01 March 2026 00:49:53 +0000 (0:00:00.832) 0:00:28.712 ********** 2026-03-01 01:00:03.811514 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811525 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811532 | orchestrator | skipping: 
[testbed-node-5] 2026-03-01 01:00:03.811541 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811549 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811555 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811561 | orchestrator | 2026-03-01 01:00:03.811571 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-01 01:00:03.811577 | orchestrator | Sunday 01 March 2026 00:49:54 +0000 (0:00:00.795) 0:00:29.508 ********** 2026-03-01 01:00:03.811583 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811589 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811596 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811602 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811609 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811618 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811631 | orchestrator | 2026-03-01 01:00:03.811636 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-01 01:00:03.811643 | orchestrator | Sunday 01 March 2026 00:49:55 +0000 (0:00:00.839) 0:00:30.348 ********** 2026-03-01 01:00:03.811648 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.811654 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.811661 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.811666 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.811672 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.811678 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.811683 | orchestrator | 2026-03-01 01:00:03.811689 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-01 01:00:03.811695 | orchestrator | Sunday 01 March 2026 00:49:56 +0000 (0:00:00.936) 0:00:31.285 ********** 2026-03-01 01:00:03.811702 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31f22992--0e1a--5ef5--a8b3--14a12910c272-osd--block--31f22992--0e1a--5ef5--a8b3--14a12910c272', 'dm-uuid-LVM-ZJGtCQF6v1S5Yu9yuOCiJGbLXIQrfHttVCKEY7DBdmSjbodhQeQY4g11ngYvfdI2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3-osd--block--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3', 'dm-uuid-LVM-LX5TJN4QJIjZNUTehHp2O357487HqAP19VEUTo3ChWgOBrMUqm1cCb2jg3r9YlwW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-01 01:00:03.811768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--024d169c--08bb--513a--b447--fe5a7c318e63-osd--block--024d169c--08bb--513a--b447--fe5a7c318e63', 'dm-uuid-LVM-mzgyAp4vw7ckb27duHzddo8Zn4qMBdwmux1G1ZIIWexZmaJzgKAreEkfySOmlweu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.811793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b33a93dc--e50a--56e8--9161--d310a7d41007-osd--block--b33a93dc--e50a--56e8--9161--d310a7d41007', 'dm-uuid-LVM-v3o82Fgeuju9hDVXfQOZ5UaNsxeSpxK77njOjMRF9KSq1HMBwaxDGNX3CKMtxMIe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.813718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 
'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.813766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.813774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31f22992--0e1a--5ef5--a8b3--14a12910c272-osd--block--31f22992--0e1a--5ef5--a8b3--14a12910c272'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zdGVLJ-V5gm-IqSV-fzjR-xrHd-FM9P-oMCgkd', 'scsi-0QEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0', 'scsi-SQEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.813781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.813843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3-osd--block--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pQiiT0-1fQr-kPce-rgfU-KeAC-vxST-Vg7e3r', 'scsi-0QEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec', 'scsi-SQEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.813855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.813918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17', 'scsi-SQEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.813932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.813938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.813944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-01 01:00:03.815596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.815690 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--024d169c--08bb--513a--b447--fe5a7c318e63-osd--block--024d169c--08bb--513a--b447--fe5a7c318e63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VT6Hxk-OOUG-qeJQ-fb6b-cwwz-OqYZ-9TvXjl', 'scsi-0QEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389', 'scsi-SQEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.815695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b33a93dc--e50a--56e8--9161--d310a7d41007-osd--block--b33a93dc--e50a--56e8--9161--d310a7d41007'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UW0p8v-atpJ-tsfM-QCHF-dCbs-8Eoy-5fYRbS', 'scsi-0QEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6', 'scsi-SQEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.815700 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.815718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d-osd--block--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d', 'dm-uuid-LVM-wRmWDUNJPt67ozAI6V0Iyirq37GUM3D562kx6TYQ4CSJ4UCLlAwmSFyH40byWHN1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c', 'scsi-SQEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.815741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1a7437a--a9c6--5afd--b028--da6f65a62b89-osd--block--d1a7437a--a9c6--5afd--b028--da6f65a62b89', 'dm-uuid-LVM-2Ox7t6bo83O9jU0axPebCrOBB156JJHk65EARBuFNKoVl8g7TRHbfBMQXS65kqKL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.815755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-01 01:00:03.815973 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.815981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.815993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60', 'scsi-SQEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0c2808e-98cb-491a-8a34-4e9503ad7b60-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816368 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816396 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e', 'scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_e86ac708-d159-4a58-aba3-0d32343dfb5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d-osd--block--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IIKyfs-DDRe-vOw6-n6TR-1J1I-YxeN-dtTeK0', 'scsi-0QEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa', 'scsi-SQEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--d1a7437a--a9c6--5afd--b028--da6f65a62b89-osd--block--d1a7437a--a9c6--5afd--b028--da6f65a62b89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R0IrbS-6ZVu-oH9t-3sKs-1lcJ-pg5J-AZQ5u1', 'scsi-0QEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7', 'scsi-SQEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c', 'scsi-SQEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7', 'scsi-SQEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b16b83d-56b3-4b94-b113-6fb31fe8cad7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816567 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.816574 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.816581 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.816587 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.816594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.816711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-01 01:00:03.817195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:00:03.817787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part1', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part14', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part15', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part16', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.817888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:00:03.817901 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.817908 | orchestrator | 2026-03-01 01:00:03.817915 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-01 01:00:03.817922 | orchestrator | Sunday 01 March 2026 00:49:57 +0000 (0:00:01.429) 0:00:32.714 ********** 2026-03-01 01:00:03.817927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31f22992--0e1a--5ef5--a8b3--14a12910c272-osd--block--31f22992--0e1a--5ef5--a8b3--14a12910c272', 
'dm-uuid-LVM-ZJGtCQF6v1S5Yu9yuOCiJGbLXIQrfHttVCKEY7DBdmSjbodhQeQY4g11ngYvfdI2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.817938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3-osd--block--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3', 'dm-uuid-LVM-LX5TJN4QJIjZNUTehHp2O357487HqAP19VEUTo3ChWgOBrMUqm1cCb2jg3r9YlwW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.817942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.817947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.817956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.817998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--024d169c--08bb--513a--b447--fe5a7c318e63-osd--block--024d169c--08bb--513a--b447--fe5a7c318e63', 'dm-uuid-LVM-mzgyAp4vw7ckb27duHzddo8Zn4qMBdwmux1G1ZIIWexZmaJzgKAreEkfySOmlweu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818058 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b33a93dc--e50a--56e8--9161--d310a7d41007-osd--block--b33a93dc--e50a--56e8--9161--d310a7d41007', 'dm-uuid-LVM-v3o82Fgeuju9hDVXfQOZ5UaNsxeSpxK77njOjMRF9KSq1HMBwaxDGNX3CKMtxMIe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818159 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818164 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 
01:00:03.818175 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818241 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818252 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.818262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-01 01:00:03.818304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-01 01:00:03.818318 | orchestrator | skipping: [testbed-node-4] => (item=sdb) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818323 | orchestrator | skipping: [testbed-node-3] => (item=sdb) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818361 | orchestrator | skipping: [testbed-node-4] => (item=sdc) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818371 | orchestrator | skipping: [testbed-node-4] => (item=sdd) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818383 | orchestrator | skipping: [testbed-node-3] => (item=sdc) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818393 | orchestrator | skipping: [testbed-node-4] => (item=sr0) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818404 | orchestrator | skipping: [testbed-node-3] => (item=sdd) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818452 | orchestrator | skipping: [testbed-node-3] => (item=sr0) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818473 | orchestrator | skipping: [testbed-node-5] => (item=dm-0) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818481 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.818496 | orchestrator | skipping: [testbed-node-5] => (item=dm-1) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818504 | orchestrator | skipping: [testbed-node-5] => (item=loop0) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818515 | orchestrator | skipping: [testbed-node-5] => (item=loop1) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818521 | orchestrator | skipping: [testbed-node-5] => (item=loop2) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818569 | orchestrator | skipping: [testbed-node-0] => (item=loop0) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818582 | orchestrator | skipping: [testbed-node-5] => (item=loop3) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818590 | orchestrator | skipping: [testbed-node-0] => (item=loop1) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818599 | orchestrator | skipping: [testbed-node-5] => (item=loop4) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818607 | orchestrator | skipping: [testbed-node-5] => (item=loop5) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818634 | orchestrator | skipping: [testbed-node-0] => (item=loop2) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818644 | orchestrator | skipping: [testbed-node-5] => (item=loop6) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818763 | orchestrator | skipping: [testbed-node-0] => (item=loop3) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818775 | orchestrator | skipping: [testbed-node-5] => (item=loop7) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818783 | orchestrator | skipping: [testbed-node-0] => (item=loop4) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818788 | orchestrator | skipping: [testbed-node-5] => (item=sda) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818849 | orchestrator | skipping: [testbed-node-5] => (item=sdb) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818861 | orchestrator | skipping: [testbed-node-5] => (item=sdc) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818867 | orchestrator | skipping: [testbed-node-0] => (item=loop5) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818878 | orchestrator | skipping: [testbed-node-5] => (item=sdd) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818885 | orchestrator | skipping: [testbed-node-0] => (item=loop6) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818892 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.818928 | orchestrator | skipping: [testbed-node-5] => (item=sr0) [false_condition: osd_auto_discovery | default(False) | bool]
2026-03-01 01:00:03.818935 | orchestrator | skipping: [testbed-node-0] => (item=loop7) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.818946 | orchestrator | skipping: [testbed-node-0] => (item=sda) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819018 | orchestrator | skipping: [testbed-node-0] => (item=sr0) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819029 | orchestrator | skipping: [testbed-node-1] => (item=loop0) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819036 | orchestrator | skipping: [testbed-node-1] => (item=loop1) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819047 | orchestrator | skipping: [testbed-node-1] => (item=loop2) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819062 | orchestrator | skipping: [testbed-node-1] => (item=loop3) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819068 | orchestrator | skipping: [testbed-node-1] => (item=loop4) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819075 | orchestrator | skipping: [testbed-node-1] => (item=loop5) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819120 | orchestrator | skipping: [testbed-node-1] => (item=loop6) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819129 | orchestrator | skipping: [testbed-node-1] => (item=loop7) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819136 | orchestrator | skipping: [testbed-node-1] => (item=sda) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819147 | orchestrator | skipping: [testbed-node-1] => (item=sr0) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819187 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.819196 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.819203 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.819210 | orchestrator | skipping: [testbed-node-2] => (item=loop0) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819217 | orchestrator | skipping: [testbed-node-2] => (item=loop1) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819264 | orchestrator | skipping: [testbed-node-2] => (item=loop2) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 01:00:03.819275 | orchestrator | skipping: [testbed-node-2] => (item=loop3) [false_condition: inventory_hostname in groups.get(osd_group_name, [])]
2026-03-01 
01:00:03.819279 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819341 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819352 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819360 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part1', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part14', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part15', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part16', 'scsi-SQEMU_QEMU_HARDDISK_96df6614-cdd9-4e86-8384-63e48cc6d403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819375 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:00:03.819381 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.819391 | orchestrator | 2026-03-01 01:00:03.819444 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] 
****************************** 2026-03-01 01:00:03.819453 | orchestrator | Sunday 01 March 2026 00:50:00 +0000 (0:00:02.325) 0:00:35.040 ********** 2026-03-01 01:00:03.819459 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.819466 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.819472 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.819477 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.819483 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.819500 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.819506 | orchestrator | 2026-03-01 01:00:03.819513 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-01 01:00:03.819519 | orchestrator | Sunday 01 March 2026 00:50:02 +0000 (0:00:02.670) 0:00:37.711 ********** 2026-03-01 01:00:03.819525 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.819533 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.819543 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.819556 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.819562 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.819567 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.819572 | orchestrator | 2026-03-01 01:00:03.819579 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-01 01:00:03.819584 | orchestrator | Sunday 01 March 2026 00:50:04 +0000 (0:00:01.153) 0:00:38.864 ********** 2026-03-01 01:00:03.819591 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.819598 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.819602 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.819605 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.819609 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.819613 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.819616 | orchestrator | 2026-03-01 01:00:03.819620 | 
orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-01 01:00:03.819624 | orchestrator | Sunday 01 March 2026 00:50:05 +0000 (0:00:00.979) 0:00:39.844 ********** 2026-03-01 01:00:03.819628 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.819631 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.819635 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.819639 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.819643 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.819646 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.819650 | orchestrator | 2026-03-01 01:00:03.819656 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-01 01:00:03.819662 | orchestrator | Sunday 01 March 2026 00:50:05 +0000 (0:00:00.646) 0:00:40.491 ********** 2026-03-01 01:00:03.819670 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.819677 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.819687 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.819693 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.819699 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.819705 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.819711 | orchestrator | 2026-03-01 01:00:03.819718 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-01 01:00:03.819725 | orchestrator | Sunday 01 March 2026 00:50:06 +0000 (0:00:01.182) 0:00:41.674 ********** 2026-03-01 01:00:03.819731 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.819737 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.819754 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.819760 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.819766 | orchestrator | skipping: [testbed-node-1] 
2026-03-01 01:00:03.819771 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.819777 | orchestrator | 2026-03-01 01:00:03.819783 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-01 01:00:03.819789 | orchestrator | Sunday 01 March 2026 00:50:07 +0000 (0:00:00.767) 0:00:42.441 ********** 2026-03-01 01:00:03.819796 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-01 01:00:03.819828 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-01 01:00:03.819832 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-01 01:00:03.819835 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-01 01:00:03.819839 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-01 01:00:03.819843 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-01 01:00:03.819846 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-01 01:00:03.819850 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-01 01:00:03.819854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-01 01:00:03.819858 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-01 01:00:03.819861 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-01 01:00:03.819865 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-01 01:00:03.819873 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-01 01:00:03.819877 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-01 01:00:03.819881 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-01 01:00:03.819884 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-01 01:00:03.819888 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-01 01:00:03.819892 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-01 
01:00:03.819895 | orchestrator | 2026-03-01 01:00:03.819899 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-01 01:00:03.819903 | orchestrator | Sunday 01 March 2026 00:50:10 +0000 (0:00:03.083) 0:00:45.525 ********** 2026-03-01 01:00:03.819907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-01 01:00:03.819910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-01 01:00:03.819914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-01 01:00:03.819918 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.819921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-01 01:00:03.819925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-01 01:00:03.819929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-01 01:00:03.819933 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.819936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-01 01:00:03.819963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-01 01:00:03.819968 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-01 01:00:03.819972 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.819975 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-01 01:00:03.819979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-01 01:00:03.819983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-01 01:00:03.819986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-01 01:00:03.819990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-01 01:00:03.819994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-01 01:00:03.819998 | orchestrator | skipping: 
[testbed-node-1] 2026-03-01 01:00:03.820001 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.820005 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-01 01:00:03.820009 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-01 01:00:03.820013 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-01 01:00:03.820016 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.820020 | orchestrator | 2026-03-01 01:00:03.820024 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-01 01:00:03.820031 | orchestrator | Sunday 01 March 2026 00:50:11 +0000 (0:00:00.667) 0:00:46.192 ********** 2026-03-01 01:00:03.820038 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.820044 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.820050 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.820058 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.820066 | orchestrator | 2026-03-01 01:00:03.820071 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-01 01:00:03.820076 | orchestrator | Sunday 01 March 2026 00:50:12 +0000 (0:00:01.070) 0:00:47.262 ********** 2026-03-01 01:00:03.820080 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820085 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.820089 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.820093 | orchestrator | 2026-03-01 01:00:03.820098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-01 01:00:03.820109 | orchestrator | Sunday 01 March 2026 00:50:13 +0000 (0:00:00.483) 0:00:47.747 ********** 2026-03-01 01:00:03.820113 | orchestrator | 
skipping: [testbed-node-3] 2026-03-01 01:00:03.820117 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.820122 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.820126 | orchestrator | 2026-03-01 01:00:03.820131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-01 01:00:03.820135 | orchestrator | Sunday 01 March 2026 00:50:13 +0000 (0:00:00.518) 0:00:48.265 ********** 2026-03-01 01:00:03.820140 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820144 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.820149 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.820153 | orchestrator | 2026-03-01 01:00:03.820158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-01 01:00:03.820163 | orchestrator | Sunday 01 March 2026 00:50:14 +0000 (0:00:00.543) 0:00:48.808 ********** 2026-03-01 01:00:03.820167 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.820172 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.820177 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.820181 | orchestrator | 2026-03-01 01:00:03.820185 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-01 01:00:03.820190 | orchestrator | Sunday 01 March 2026 00:50:14 +0000 (0:00:00.638) 0:00:49.447 ********** 2026-03-01 01:00:03.820195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.820199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.820204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.820208 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820213 | orchestrator | 2026-03-01 01:00:03.820217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-01 01:00:03.820222 | 
orchestrator | Sunday 01 March 2026 00:50:15 +0000 (0:00:00.755) 0:00:50.202 ********** 2026-03-01 01:00:03.820226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.820231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.820235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.820239 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820243 | orchestrator | 2026-03-01 01:00:03.820248 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-01 01:00:03.820253 | orchestrator | Sunday 01 March 2026 00:50:16 +0000 (0:00:00.966) 0:00:51.169 ********** 2026-03-01 01:00:03.820257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.820262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.820266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.820270 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820275 | orchestrator | 2026-03-01 01:00:03.820279 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-01 01:00:03.820284 | orchestrator | Sunday 01 March 2026 00:50:17 +0000 (0:00:00.781) 0:00:51.951 ********** 2026-03-01 01:00:03.820288 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.820293 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.820297 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.820302 | orchestrator | 2026-03-01 01:00:03.820306 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-01 01:00:03.820311 | orchestrator | Sunday 01 March 2026 00:50:17 +0000 (0:00:00.460) 0:00:52.411 ********** 2026-03-01 01:00:03.820316 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-01 01:00:03.820320 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-01 01:00:03.820338 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-01 01:00:03.820344 | orchestrator | 2026-03-01 01:00:03.820348 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-01 01:00:03.820353 | orchestrator | Sunday 01 March 2026 00:50:18 +0000 (0:00:00.852) 0:00:53.264 ********** 2026-03-01 01:00:03.820360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:00:03.820365 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:00:03.820370 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:00:03.820374 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-01 01:00:03.820379 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-01 01:00:03.820383 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-01 01:00:03.820387 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-01 01:00:03.820392 | orchestrator | 2026-03-01 01:00:03.820396 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-01 01:00:03.820401 | orchestrator | Sunday 01 March 2026 00:50:19 +0000 (0:00:00.962) 0:00:54.226 ********** 2026-03-01 01:00:03.820407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:00:03.820414 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:00:03.820426 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:00:03.820433 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-01 01:00:03.820439 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-01 01:00:03.820445 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-01 01:00:03.820452 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-01 01:00:03.820459 | orchestrator | 2026-03-01 01:00:03.820466 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-01 01:00:03.820474 | orchestrator | Sunday 01 March 2026 00:50:21 +0000 (0:00:01.722) 0:00:55.948 ********** 2026-03-01 01:00:03.820478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.820483 | orchestrator | 2026-03-01 01:00:03.820486 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-01 01:00:03.820490 | orchestrator | Sunday 01 March 2026 00:50:22 +0000 (0:00:01.101) 0:00:57.050 ********** 2026-03-01 01:00:03.820494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.820498 | orchestrator | 2026-03-01 01:00:03.820501 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-01 01:00:03.820505 | orchestrator | Sunday 01 March 2026 00:50:23 +0000 (0:00:01.169) 0:00:58.220 ********** 2026-03-01 01:00:03.820509 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.820513 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.820516 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.820520 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.820524 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.820530 | 
orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.820537 | orchestrator | 2026-03-01 01:00:03.820545 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-01 01:00:03.820555 | orchestrator | Sunday 01 March 2026 00:50:24 +0000 (0:00:00.974) 0:00:59.194 ********** 2026-03-01 01:00:03.820560 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.820566 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.820573 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.820579 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.820585 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.820591 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.820597 | orchestrator | 2026-03-01 01:00:03.820606 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-01 01:00:03.820611 | orchestrator | Sunday 01 March 2026 00:50:25 +0000 (0:00:01.026) 0:01:00.220 ********** 2026-03-01 01:00:03.820617 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.820623 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.820628 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.820635 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.820641 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.820648 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.820653 | orchestrator | 2026-03-01 01:00:03.820657 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-01 01:00:03.820660 | orchestrator | Sunday 01 March 2026 00:50:27 +0000 (0:00:01.812) 0:01:02.033 ********** 2026-03-01 01:00:03.820664 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.820668 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.820672 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.820675 | orchestrator | skipping: [testbed-node-1] 2026-03-01 
01:00:03.820679 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.820683 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.820687 | orchestrator |
2026-03-01 01:00:03.820690 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-01 01:00:03.820694 | orchestrator | Sunday 01 March 2026 00:50:28 +0000 (0:00:00.913) 0:01:02.946 **********
2026-03-01 01:00:03.820698 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.820702 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.820705 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.820709 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.820713 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.820733 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.820737 | orchestrator |
2026-03-01 01:00:03.820741 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-01 01:00:03.820745 | orchestrator | Sunday 01 March 2026 00:50:29 +0000 (0:00:01.564) 0:01:04.511 **********
2026-03-01 01:00:03.820749 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.820752 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.820756 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.820760 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.820764 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.820767 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.820771 | orchestrator |
2026-03-01 01:00:03.820775 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-01 01:00:03.820778 | orchestrator | Sunday 01 March 2026 00:50:30 +0000 (0:00:00.945) 0:01:05.456 **********
2026-03-01 01:00:03.820782 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.820786 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.820789 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.820793 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.820797 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.820815 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.820819 | orchestrator |
2026-03-01 01:00:03.820822 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-01 01:00:03.820826 | orchestrator | Sunday 01 March 2026 00:50:31 +0000 (0:00:00.784) 0:01:06.241 **********
2026-03-01 01:00:03.820830 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.820834 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.820837 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.820841 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.820845 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.820849 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.820852 | orchestrator |
2026-03-01 01:00:03.820856 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-01 01:00:03.820860 | orchestrator | Sunday 01 March 2026 00:50:32 +0000 (0:00:01.147) 0:01:07.388 **********
2026-03-01 01:00:03.820864 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.820871 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.820875 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.820878 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.820882 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.820886 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.820889 | orchestrator |
2026-03-01 01:00:03.820893 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-01 01:00:03.820897 | orchestrator | Sunday 01 March 2026 00:50:34 +0000 (0:00:01.649) 0:01:09.038 **********
2026-03-01 01:00:03.820903 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.820907 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.820911 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.820915 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.820918 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.820922 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.820926 | orchestrator |
2026-03-01 01:00:03.820930 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-01 01:00:03.820934 | orchestrator | Sunday 01 March 2026 00:50:34 +0000 (0:00:00.630) 0:01:09.669 **********
2026-03-01 01:00:03.820937 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.820941 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.820945 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.820948 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.820952 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.820956 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.820960 | orchestrator |
2026-03-01 01:00:03.820964 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-01 01:00:03.820968 | orchestrator | Sunday 01 March 2026 00:50:35 +0000 (0:00:00.872) 0:01:10.542 **********
2026-03-01 01:00:03.820972 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.820976 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.820979 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.820983 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.820987 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.820991 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.820994 | orchestrator |
2026-03-01 01:00:03.820998 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-01 01:00:03.821002 | orchestrator | Sunday 01 March 2026 00:50:36 +0000 (0:00:00.804) 0:01:11.346 **********
2026-03-01 01:00:03.821006 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.821009 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.821013 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.821017 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821021 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821024 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821028 | orchestrator |
2026-03-01 01:00:03.821032 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-01 01:00:03.821036 | orchestrator | Sunday 01 March 2026 00:50:37 +0000 (0:00:01.354) 0:01:12.701 **********
2026-03-01 01:00:03.821040 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.821043 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.821047 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.821054 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821060 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821066 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821072 | orchestrator |
2026-03-01 01:00:03.821078 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-01 01:00:03.821084 | orchestrator | Sunday 01 March 2026 00:50:39 +0000 (0:00:01.274) 0:01:13.976 **********
2026-03-01 01:00:03.821090 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821097 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821108 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821114 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821119 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821133 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821139 | orchestrator |
2026-03-01 01:00:03.821145 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-01 01:00:03.821150 | orchestrator | Sunday 01 March 2026 00:50:40 +0000 (0:00:01.415) 0:01:15.391 **********
2026-03-01 01:00:03.821157 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821162 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821168 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821174 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821201 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821207 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821213 | orchestrator |
2026-03-01 01:00:03.821219 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-01 01:00:03.821224 | orchestrator | Sunday 01 March 2026 00:50:41 +0000 (0:00:00.718) 0:01:16.109 **********
2026-03-01 01:00:03.821231 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821236 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821240 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821244 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.821248 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.821251 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.821255 | orchestrator |
2026-03-01 01:00:03.821259 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-01 01:00:03.821263 | orchestrator | Sunday 01 March 2026 00:50:42 +0000 (0:00:00.921) 0:01:17.031 **********
2026-03-01 01:00:03.821266 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.821270 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.821274 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.821278 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.821281 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.821285 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.821289 | orchestrator |
2026-03-01 01:00:03.821293 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-01 01:00:03.821297 | orchestrator | Sunday 01 March 2026 00:50:43 +0000 (0:00:00.786) 0:01:17.818 **********
2026-03-01 01:00:03.821300 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.821304 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.821308 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.821312 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.821315 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.821319 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.821323 | orchestrator |
2026-03-01 01:00:03.821326 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-01 01:00:03.821330 | orchestrator | Sunday 01 March 2026 00:50:44 +0000 (0:00:01.431) 0:01:19.250 **********
2026-03-01 01:00:03.821334 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.821338 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.821341 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.821345 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.821349 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.821352 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.821356 | orchestrator |
2026-03-01 01:00:03.821360 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-01 01:00:03.821367 | orchestrator | Sunday 01 March 2026 00:50:46 +0000 (0:00:01.513) 0:01:20.763 **********
2026-03-01 01:00:03.821371 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.821375 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.821378 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.821382 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.821386 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.821390 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.821393 | orchestrator |
2026-03-01 01:00:03.821397 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-01 01:00:03.821401 | orchestrator | Sunday 01 March 2026 00:50:48 +0000 (0:00:02.609) 0:01:23.373 **********
2026-03-01 01:00:03.821410 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.821417 | orchestrator |
2026-03-01 01:00:03.821422 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-01 01:00:03.821429 | orchestrator | Sunday 01 March 2026 00:50:49 +0000 (0:00:01.187) 0:01:24.560 **********
2026-03-01 01:00:03.821435 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821441 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821447 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821453 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821459 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821465 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821471 | orchestrator |
2026-03-01 01:00:03.821478 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-01 01:00:03.821484 | orchestrator | Sunday 01 March 2026 00:50:50 +0000 (0:00:00.561) 0:01:25.122 **********
2026-03-01 01:00:03.821490 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821499 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821506 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821512 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821518 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821524 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821529 | orchestrator |
2026-03-01 01:00:03.821535 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-01 01:00:03.821541 | orchestrator | Sunday 01 March 2026 00:50:51 +0000 (0:00:00.698) 0:01:25.821 **********
2026-03-01 01:00:03.821548 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821554 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821560 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821566 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821573 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821579 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-01 01:00:03.821586 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821592 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821595 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821599 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821630 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821639 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-01 01:00:03.821645 | orchestrator |
2026-03-01 01:00:03.821650 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-01 01:00:03.821656 | orchestrator | Sunday 01 March 2026 00:50:52 +0000 (0:00:01.400) 0:01:27.222 **********
2026-03-01 01:00:03.821662 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.821668 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.821674 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.821680 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.821685 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.821692 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.821698 | orchestrator |
2026-03-01 01:00:03.821705 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-01 01:00:03.821711 | orchestrator | Sunday 01 March 2026 00:50:53 +0000 (0:00:01.012) 0:01:28.235 **********
2026-03-01 01:00:03.821723 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821730 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821736 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821742 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821751 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821759 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821765 | orchestrator |
2026-03-01 01:00:03.821771 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-01 01:00:03.821778 | orchestrator | Sunday 01 March 2026 00:50:54 +0000 (0:00:00.503) 0:01:28.738 **********
2026-03-01 01:00:03.821784 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821791 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821797 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821816 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821822 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821827 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821833 | orchestrator |
2026-03-01 01:00:03.821838 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-01 01:00:03.821843 | orchestrator | Sunday 01 March 2026 00:50:54 +0000 (0:00:00.643) 0:01:29.382 **********
2026-03-01 01:00:03.821849 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.821854 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.821859 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.821864 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.821874 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.821880 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.821886 | orchestrator |
2026-03-01 01:00:03.821892 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-01 01:00:03.821899 | orchestrator | Sunday 01 March 2026 00:50:55 +0000 (0:00:00.507) 0:01:29.889 **********
2026-03-01 01:00:03.821905 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.821912 | orchestrator |
2026-03-01 01:00:03.821920 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-01 01:00:03.821929 | orchestrator | Sunday 01 March 2026 00:50:56 +0000 (0:00:01.001) 0:01:30.891 **********
2026-03-01 01:00:03.821935 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.821941 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.821947 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.821953 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.821960 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.821966 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.821973 | orchestrator |
2026-03-01 01:00:03.821979 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-01 01:00:03.821985 | orchestrator | Sunday 01 March 2026 00:51:42 +0000 (0:00:46.183) 0:02:17.074 **********
2026-03-01 01:00:03.821991 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.821998 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822006 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822048 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822056 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.822062 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822068 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822074 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822080 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.822087 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822093 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822105 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822111 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.822118 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822124 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822129 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822134 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.822140 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822145 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822151 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822186 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-01 01:00:03.822191 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-01 01:00:03.822195 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-01 01:00:03.822199 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822202 | orchestrator |
2026-03-01 01:00:03.822206 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-01 01:00:03.822210 | orchestrator | Sunday 01 March 2026 00:51:43 +0000 (0:00:00.732) 0:02:17.807 **********
2026-03-01 01:00:03.822214 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822218 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822221 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822225 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822229 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822232 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822237 | orchestrator |
2026-03-01 01:00:03.822243 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-01 01:00:03.822254 | orchestrator | Sunday 01 March 2026 00:51:43 +0000 (0:00:00.898) 0:02:18.705 **********
2026-03-01 01:00:03.822261 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822269 | orchestrator |
2026-03-01 01:00:03.822275 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-01 01:00:03.822281 | orchestrator | Sunday 01 March 2026 00:51:44 +0000 (0:00:00.215) 0:02:18.921 **********
2026-03-01 01:00:03.822288 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822295 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822302 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822309 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822316 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822323 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822330 | orchestrator |
2026-03-01 01:00:03.822337 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-01 01:00:03.822344 | orchestrator | Sunday 01 March 2026 00:51:44 +0000 (0:00:00.606) 0:02:19.527 **********
2026-03-01 01:00:03.822352 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822359 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822365 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822372 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822380 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822384 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822388 | orchestrator |
2026-03-01 01:00:03.822395 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-01 01:00:03.822399 | orchestrator | Sunday 01 March 2026 00:51:45 +0000 (0:00:00.790) 0:02:20.318 **********
2026-03-01 01:00:03.822403 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822406 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822410 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822418 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822422 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822426 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822430 | orchestrator |
2026-03-01 01:00:03.822433 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-01 01:00:03.822437 | orchestrator | Sunday 01 March 2026 00:51:46 +0000 (0:00:00.663) 0:02:20.981 **********
2026-03-01 01:00:03.822441 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.822445 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.822448 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.822452 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.822456 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.822460 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.822465 | orchestrator |
2026-03-01 01:00:03.822471 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-01 01:00:03.822477 | orchestrator | Sunday 01 March 2026 00:51:48 +0000 (0:00:02.409) 0:02:23.390 **********
2026-03-01 01:00:03.822484 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.822489 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.822493 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.822497 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.822501 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.822504 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.822508 | orchestrator |
2026-03-01 01:00:03.822512 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-01 01:00:03.822515 | orchestrator | Sunday 01 March 2026 00:51:49 +0000 (0:00:00.446) 0:02:23.837 **********
2026-03-01 01:00:03.822520 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.822524 | orchestrator |
2026-03-01 01:00:03.822528 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-01 01:00:03.822532 | orchestrator | Sunday 01 March 2026 00:51:50 +0000 (0:00:00.907) 0:02:24.744 **********
2026-03-01 01:00:03.822536 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822539 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822543 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822547 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822550 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822555 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822558 | orchestrator |
2026-03-01 01:00:03.822562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-01 01:00:03.822566 | orchestrator | Sunday 01 March 2026 00:51:50 +0000 (0:00:00.809) 0:02:25.553 **********
2026-03-01 01:00:03.822570 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822574 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822578 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822585 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822591 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822597 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822603 | orchestrator |
2026-03-01 01:00:03.822610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-01 01:00:03.822616 | orchestrator | Sunday 01 March 2026 00:51:51 +0000 (0:00:00.534) 0:02:26.088 **********
2026-03-01 01:00:03.822623 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822630 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822663 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822672 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822679 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822686 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822692 | orchestrator |
2026-03-01 01:00:03.822698 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-01 01:00:03.822704 | orchestrator | Sunday 01 March 2026 00:51:52 +0000 (0:00:00.677) 0:02:26.765 **********
2026-03-01 01:00:03.822715 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822721 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822727 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822733 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822739 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822745 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822751 | orchestrator |
2026-03-01 01:00:03.822758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-01 01:00:03.822762 | orchestrator | Sunday 01 March 2026 00:51:52 +0000 (0:00:00.561) 0:02:27.327 **********
2026-03-01 01:00:03.822766 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822770 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822773 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822778 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822784 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822793 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822832 | orchestrator |
2026-03-01 01:00:03.822840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-01 01:00:03.822847 | orchestrator | Sunday 01 March 2026 00:51:53 +0000 (0:00:00.699) 0:02:28.026 **********
2026-03-01 01:00:03.822853 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822860 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822866 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822872 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822878 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822885 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822891 | orchestrator |
2026-03-01 01:00:03.822897 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-01 01:00:03.822904 | orchestrator | Sunday 01 March 2026 00:51:53 +0000 (0:00:00.520) 0:02:28.546 **********
2026-03-01 01:00:03.822910 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822916 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822923 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.822930 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.822937 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.822948 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.822955 | orchestrator |
2026-03-01 01:00:03.822964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-01 01:00:03.822974 | orchestrator | Sunday 01 March 2026 00:51:54 +0000 (0:00:00.732) 0:02:29.278 **********
2026-03-01 01:00:03.822980 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.822986 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.822993 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.823000 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.823007 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.823014 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.823021 | orchestrator |
2026-03-01 01:00:03.823027 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-01 01:00:03.823034 | orchestrator | Sunday 01 March 2026 00:51:55 +0000 (0:00:00.520) 0:02:29.799 **********
2026-03-01 01:00:03.823040 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.823047 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.823054 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.823060 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.823065 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.823071 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.823078 | orchestrator |
2026-03-01 01:00:03.823085 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-01 01:00:03.823091 | orchestrator | Sunday 01 March 2026 00:51:56 +0000 (0:00:01.286) 0:02:31.086 **********
2026-03-01 01:00:03.823098 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.823106 | orchestrator |
2026-03-01 01:00:03.823118 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-01 01:00:03.823125 | orchestrator | Sunday 01 March 2026 00:51:57 +0000 (0:00:01.096) 0:02:32.182 **********
2026-03-01 01:00:03.823132 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-01 01:00:03.823139 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-01 01:00:03.823146 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-01 01:00:03.823152 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-01 01:00:03.823158 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-01 01:00:03.823165 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823171 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823179 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823186 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823199 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823205 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-01 01:00:03.823213 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823228 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823235 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823242 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823249 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-01 01:00:03.823301 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823307 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823314 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823320 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823338 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-01 01:00:03.823344 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823350 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823356 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823362 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823368 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-01 01:00:03.823380 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-01 01:00:03.823386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823392 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-01 01:00:03.823399 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-01 01:00:03.823405 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-01 01:00:03.823412 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-01 01:00:03.823418 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-01 01:00:03.823425 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-01 01:00:03.823431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-01 01:00:03.823453 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-01 01:00:03.823461 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-01 01:00:03.823472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-01 01:00:03.823478 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-01 01:00:03.823484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-01 01:00:03.823490 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-01 01:00:03.823497 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-01 01:00:03.823503 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-01 01:00:03.823509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-01 01:00:03.823515 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-01 01:00:03.823522 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-01
01:00:03.823527 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-01 01:00:03.823534 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-01 01:00:03.823540 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-01 01:00:03.823545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823548 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-01 01:00:03.823552 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-01 01:00:03.823556 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-01 01:00:03.823559 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-01 01:00:03.823563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-01 01:00:03.823567 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823570 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-01 01:00:03.823578 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823581 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823585 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-01 01:00:03.823592 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823596 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-01 01:00:03.823600 | orchestrator | changed: [testbed-node-1] => 
(item=/var/run/ceph) 2026-03-01 01:00:03.823603 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823607 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823611 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-01 01:00:03.823615 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-01 01:00:03.823618 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823646 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823650 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823654 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-01 01:00:03.823658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823668 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-01 01:00:03.823672 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-01 01:00:03.823675 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-01 01:00:03.823679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-01 01:00:03.823683 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-01 01:00:03.823687 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-01 01:00:03.823690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823694 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-01 01:00:03.823698 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rbd) 2026-03-01 01:00:03.823702 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-01 01:00:03.823705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-01 01:00:03.823710 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-01 01:00:03.823718 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-01 01:00:03.823727 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-01 01:00:03.823734 | orchestrator | 2026-03-01 01:00:03.823741 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-01 01:00:03.823748 | orchestrator | Sunday 01 March 2026 00:52:05 +0000 (0:00:07.978) 0:02:40.161 ********** 2026-03-01 01:00:03.823755 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.823762 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.823769 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.823780 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.823788 | orchestrator | 2026-03-01 01:00:03.823795 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-01 01:00:03.823816 | orchestrator | Sunday 01 March 2026 00:52:06 +0000 (0:00:00.751) 0:02:40.912 ********** 2026-03-01 01:00:03.823822 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.823829 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.823839 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 
8081}) 2026-03-01 01:00:03.823846 | orchestrator | 2026-03-01 01:00:03.823852 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-01 01:00:03.823858 | orchestrator | Sunday 01 March 2026 00:52:07 +0000 (0:00:00.903) 0:02:41.815 ********** 2026-03-01 01:00:03.823864 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.823871 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.823880 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.823888 | orchestrator | 2026-03-01 01:00:03.823893 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-01 01:00:03.823899 | orchestrator | Sunday 01 March 2026 00:52:08 +0000 (0:00:01.347) 0:02:43.163 ********** 2026-03-01 01:00:03.823905 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.823911 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.823916 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.823922 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.823928 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.823944 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.823951 | orchestrator | 2026-03-01 01:00:03.823957 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-01 01:00:03.823964 | orchestrator | Sunday 01 March 2026 00:52:09 +0000 (0:00:00.645) 0:02:43.808 ********** 2026-03-01 01:00:03.823968 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.823971 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.823975 | orchestrator | ok: [testbed-node-5] 2026-03-01 
01:00:03.823979 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.823983 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.823986 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.823990 | orchestrator | 2026-03-01 01:00:03.823994 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-01 01:00:03.823997 | orchestrator | Sunday 01 March 2026 00:52:09 +0000 (0:00:00.783) 0:02:44.592 ********** 2026-03-01 01:00:03.824001 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824005 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824009 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824012 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824016 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824020 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824023 | orchestrator | 2026-03-01 01:00:03.824049 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-01 01:00:03.824054 | orchestrator | Sunday 01 March 2026 00:52:10 +0000 (0:00:00.690) 0:02:45.283 ********** 2026-03-01 01:00:03.824058 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824061 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824065 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824069 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824072 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824076 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824080 | orchestrator | 2026-03-01 01:00:03.824084 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-01 01:00:03.824089 | orchestrator | Sunday 01 March 2026 00:52:11 +0000 (0:00:00.631) 0:02:45.915 ********** 2026-03-01 01:00:03.824098 | orchestrator | skipping: [testbed-node-3] 2026-03-01 
01:00:03.824106 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824113 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824120 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824128 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824134 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824142 | orchestrator | 2026-03-01 01:00:03.824149 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-01 01:00:03.824156 | orchestrator | Sunday 01 March 2026 00:52:11 +0000 (0:00:00.495) 0:02:46.411 ********** 2026-03-01 01:00:03.824163 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824170 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824177 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824183 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824191 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824198 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824205 | orchestrator | 2026-03-01 01:00:03.824213 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-01 01:00:03.824220 | orchestrator | Sunday 01 March 2026 00:52:12 +0000 (0:00:00.630) 0:02:47.041 ********** 2026-03-01 01:00:03.824227 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824234 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824240 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824246 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824252 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824259 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824266 | orchestrator | 2026-03-01 01:00:03.824272 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' 
(new report)] *** 2026-03-01 01:00:03.824289 | orchestrator | Sunday 01 March 2026 00:52:12 +0000 (0:00:00.481) 0:02:47.522 ********** 2026-03-01 01:00:03.824296 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824302 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824309 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824316 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824323 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824328 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824332 | orchestrator | 2026-03-01 01:00:03.824335 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-01 01:00:03.824339 | orchestrator | Sunday 01 March 2026 00:52:13 +0000 (0:00:00.775) 0:02:48.298 ********** 2026-03-01 01:00:03.824343 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824347 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824350 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824354 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.824358 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.824362 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.824366 | orchestrator | 2026-03-01 01:00:03.824369 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-01 01:00:03.824373 | orchestrator | Sunday 01 March 2026 00:52:16 +0000 (0:00:03.007) 0:02:51.306 ********** 2026-03-01 01:00:03.824377 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.824381 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.824384 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.824388 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824392 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824395 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824399 | 
orchestrator | 2026-03-01 01:00:03.824403 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-01 01:00:03.824407 | orchestrator | Sunday 01 March 2026 00:52:17 +0000 (0:00:00.796) 0:02:52.102 ********** 2026-03-01 01:00:03.824410 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.824414 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.824418 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.824422 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824425 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824429 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824433 | orchestrator | 2026-03-01 01:00:03.824436 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-01 01:00:03.824440 | orchestrator | Sunday 01 March 2026 00:52:18 +0000 (0:00:00.836) 0:02:52.939 ********** 2026-03-01 01:00:03.824444 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824450 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824456 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824462 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824468 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824474 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824480 | orchestrator | 2026-03-01 01:00:03.824486 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-01 01:00:03.824492 | orchestrator | Sunday 01 March 2026 00:52:18 +0000 (0:00:00.788) 0:02:53.727 ********** 2026-03-01 01:00:03.824498 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.824505 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-03-01 01:00:03.824511 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.824516 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824552 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824560 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824572 | orchestrator | 2026-03-01 01:00:03.824578 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-01 01:00:03.824584 | orchestrator | Sunday 01 March 2026 00:52:19 +0000 (0:00:00.791) 0:02:54.519 ********** 2026-03-01 01:00:03.824592 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-01 01:00:03.824601 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-01 01:00:03.824609 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-01 01:00:03.824616 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824621 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-01 01:00:03.824625 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824632 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-01 01:00:03.824636 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-01 01:00:03.824640 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824644 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824647 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824651 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824655 | orchestrator | 2026-03-01 01:00:03.824658 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-01 01:00:03.824662 | orchestrator | Sunday 01 March 2026 00:52:20 +0000 (0:00:00.713) 0:02:55.232 ********** 2026-03-01 01:00:03.824666 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824670 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824673 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824677 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824681 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824684 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824688 | orchestrator | 
2026-03-01 01:00:03.824692 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-01 01:00:03.824695 | orchestrator | Sunday 01 March 2026 00:52:21 +0000 (0:00:00.677) 0:02:55.910 ********** 2026-03-01 01:00:03.824699 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824703 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824707 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824710 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824714 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824718 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824721 | orchestrator | 2026-03-01 01:00:03.824725 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-01 01:00:03.824732 | orchestrator | Sunday 01 March 2026 00:52:22 +0000 (0:00:00.931) 0:02:56.841 ********** 2026-03-01 01:00:03.824736 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824739 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824745 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824751 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824757 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824764 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824770 | orchestrator | 2026-03-01 01:00:03.824777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-01 01:00:03.824783 | orchestrator | Sunday 01 March 2026 00:52:22 +0000 (0:00:00.566) 0:02:57.408 ********** 2026-03-01 01:00:03.824790 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824794 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824814 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824820 | orchestrator | skipping: 
[testbed-node-0] 2026-03-01 01:00:03.824826 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824832 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824838 | orchestrator | 2026-03-01 01:00:03.824844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-01 01:00:03.824874 | orchestrator | Sunday 01 March 2026 00:52:23 +0000 (0:00:00.897) 0:02:58.306 ********** 2026-03-01 01:00:03.824882 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824888 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.824894 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.824899 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824905 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824911 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824917 | orchestrator | 2026-03-01 01:00:03.824922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-01 01:00:03.824928 | orchestrator | Sunday 01 March 2026 00:52:24 +0000 (0:00:00.880) 0:02:59.187 ********** 2026-03-01 01:00:03.824933 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.824939 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.824944 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.824950 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.824956 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.824961 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.824967 | orchestrator | 2026-03-01 01:00:03.824973 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-01 01:00:03.824980 | orchestrator | Sunday 01 March 2026 00:52:25 +0000 (0:00:00.807) 0:02:59.994 ********** 2026-03-01 01:00:03.824984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.824988 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.824992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.824995 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.824999 | orchestrator | 2026-03-01 01:00:03.825003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-01 01:00:03.825007 | orchestrator | Sunday 01 March 2026 00:52:25 +0000 (0:00:00.417) 0:03:00.411 ********** 2026-03-01 01:00:03.825010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.825014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.825018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.825021 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825025 | orchestrator | 2026-03-01 01:00:03.825029 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-01 01:00:03.825032 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:00.391) 0:03:00.803 ********** 2026-03-01 01:00:03.825036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.825048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.825052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.825055 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825059 | orchestrator | 2026-03-01 01:00:03.825063 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-01 01:00:03.825067 | orchestrator | Sunday 01 March 2026 00:52:26 +0000 (0:00:00.404) 0:03:01.207 ********** 2026-03-01 01:00:03.825070 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.825074 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.825078 | orchestrator | ok: [testbed-node-5] 
2026-03-01 01:00:03.825081 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825085 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.825089 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.825093 | orchestrator | 2026-03-01 01:00:03.825096 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-01 01:00:03.825100 | orchestrator | Sunday 01 March 2026 00:52:27 +0000 (0:00:00.854) 0:03:02.062 ********** 2026-03-01 01:00:03.825104 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-01 01:00:03.825108 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-01 01:00:03.825111 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-01 01:00:03.825115 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-01 01:00:03.825119 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825122 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-01 01:00:03.825126 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.825130 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-01 01:00:03.825134 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.825137 | orchestrator | 2026-03-01 01:00:03.825141 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-01 01:00:03.825145 | orchestrator | Sunday 01 March 2026 00:52:29 +0000 (0:00:02.539) 0:03:04.602 ********** 2026-03-01 01:00:03.825149 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.825152 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.825156 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.825160 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.825163 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.825167 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.825171 | orchestrator | 2026-03-01 01:00:03.825174 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-01 01:00:03.825178 | orchestrator | Sunday 01 March 2026 00:52:32 +0000 (0:00:03.067) 0:03:07.670 ********** 2026-03-01 01:00:03.825182 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.825186 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.825189 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.825193 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.825197 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.825200 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.825204 | orchestrator | 2026-03-01 01:00:03.825208 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-01 01:00:03.825211 | orchestrator | Sunday 01 March 2026 00:52:34 +0000 (0:00:01.320) 0:03:08.990 ********** 2026-03-01 01:00:03.825215 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825219 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.825222 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.825227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.825231 | orchestrator | 2026-03-01 01:00:03.825235 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-01 01:00:03.825260 | orchestrator | Sunday 01 March 2026 00:52:35 +0000 (0:00:00.949) 0:03:09.939 ********** 2026-03-01 01:00:03.825267 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.825273 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.825280 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.825290 | orchestrator | 2026-03-01 01:00:03.825297 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-01 01:00:03.825303 | orchestrator | Sunday 01 March 2026 00:52:35 +0000 
(0:00:00.403) 0:03:10.343 ********** 2026-03-01 01:00:03.825309 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.825315 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.825321 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.825329 | orchestrator | 2026-03-01 01:00:03.825335 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-01 01:00:03.825341 | orchestrator | Sunday 01 March 2026 00:52:37 +0000 (0:00:01.412) 0:03:11.756 ********** 2026-03-01 01:00:03.825351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-01 01:00:03.825358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-01 01:00:03.825364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-01 01:00:03.825369 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825375 | orchestrator | 2026-03-01 01:00:03.825381 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-01 01:00:03.825387 | orchestrator | Sunday 01 March 2026 00:52:37 +0000 (0:00:00.588) 0:03:12.344 ********** 2026-03-01 01:00:03.825392 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.825397 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.825403 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.825409 | orchestrator | 2026-03-01 01:00:03.825415 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-01 01:00:03.825421 | orchestrator | Sunday 01 March 2026 00:52:37 +0000 (0:00:00.356) 0:03:12.701 ********** 2026-03-01 01:00:03.825428 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825433 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.825437 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.825441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-01 01:00:03.825445 | orchestrator | 2026-03-01 01:00:03.825448 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-01 01:00:03.825452 | orchestrator | Sunday 01 March 2026 00:52:38 +0000 (0:00:00.867) 0:03:13.568 ********** 2026-03-01 01:00:03.825456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.825463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.825467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.825471 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825474 | orchestrator | 2026-03-01 01:00:03.825478 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-01 01:00:03.825482 | orchestrator | Sunday 01 March 2026 00:52:39 +0000 (0:00:00.355) 0:03:13.924 ********** 2026-03-01 01:00:03.825486 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825489 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.825493 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.825497 | orchestrator | 2026-03-01 01:00:03.825500 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-01 01:00:03.825504 | orchestrator | Sunday 01 March 2026 00:52:39 +0000 (0:00:00.380) 0:03:14.304 ********** 2026-03-01 01:00:03.825508 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825512 | orchestrator | 2026-03-01 01:00:03.825515 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-01 01:00:03.825519 | orchestrator | Sunday 01 March 2026 00:52:39 +0000 (0:00:00.241) 0:03:14.546 ********** 2026-03-01 01:00:03.825523 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825527 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.825530 | 
orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.825534 | orchestrator | 2026-03-01 01:00:03.825538 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-01 01:00:03.825542 | orchestrator | Sunday 01 March 2026 00:52:40 +0000 (0:00:00.285) 0:03:14.831 ********** 2026-03-01 01:00:03.825550 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825554 | orchestrator | 2026-03-01 01:00:03.825558 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-01 01:00:03.825561 | orchestrator | Sunday 01 March 2026 00:52:40 +0000 (0:00:00.206) 0:03:15.037 ********** 2026-03-01 01:00:03.825565 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825569 | orchestrator | 2026-03-01 01:00:03.825573 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-01 01:00:03.825576 | orchestrator | Sunday 01 March 2026 00:52:40 +0000 (0:00:00.212) 0:03:15.249 ********** 2026-03-01 01:00:03.825580 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825584 | orchestrator | 2026-03-01 01:00:03.825587 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-01 01:00:03.825591 | orchestrator | Sunday 01 March 2026 00:52:40 +0000 (0:00:00.254) 0:03:15.504 ********** 2026-03-01 01:00:03.825595 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825599 | orchestrator | 2026-03-01 01:00:03.825602 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-01 01:00:03.825606 | orchestrator | Sunday 01 March 2026 00:52:40 +0000 (0:00:00.197) 0:03:15.701 ********** 2026-03-01 01:00:03.825610 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825614 | orchestrator | 2026-03-01 01:00:03.825617 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-03-01 01:00:03.825621 | orchestrator | Sunday 01 March 2026 00:52:41 +0000 (0:00:00.195) 0:03:15.897 ********** 2026-03-01 01:00:03.825625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.825629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.825632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.825636 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825640 | orchestrator | 2026-03-01 01:00:03.825643 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-01 01:00:03.825666 | orchestrator | Sunday 01 March 2026 00:52:41 +0000 (0:00:00.367) 0:03:16.264 ********** 2026-03-01 01:00:03.825671 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825675 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.825678 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.825682 | orchestrator | 2026-03-01 01:00:03.825686 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-01 01:00:03.825690 | orchestrator | Sunday 01 March 2026 00:52:41 +0000 (0:00:00.271) 0:03:16.535 ********** 2026-03-01 01:00:03.825694 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825697 | orchestrator | 2026-03-01 01:00:03.825701 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-01 01:00:03.825705 | orchestrator | Sunday 01 March 2026 00:52:42 +0000 (0:00:00.207) 0:03:16.743 ********** 2026-03-01 01:00:03.825709 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.825713 | orchestrator | 2026-03-01 01:00:03.825717 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-01 01:00:03.825723 | orchestrator | Sunday 01 March 2026 00:52:42 +0000 (0:00:00.175) 0:03:16.918 ********** 2026-03-01 
01:00:03.825730 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825740 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.825749 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.825754 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.825761 | orchestrator | 2026-03-01 01:00:03.825767 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-01 01:00:03.825774 | orchestrator | Sunday 01 March 2026 00:52:43 +0000 (0:00:00.880) 0:03:17.799 ********** 2026-03-01 01:00:03.825780 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.825787 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.825797 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.825836 | orchestrator | 2026-03-01 01:00:03.825843 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-01 01:00:03.825849 | orchestrator | Sunday 01 March 2026 00:52:43 +0000 (0:00:00.286) 0:03:18.086 ********** 2026-03-01 01:00:03.825855 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.825861 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.825866 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.825872 | orchestrator | 2026-03-01 01:00:03.825877 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-01 01:00:03.825883 | orchestrator | Sunday 01 March 2026 00:52:44 +0000 (0:00:01.232) 0:03:19.318 ********** 2026-03-01 01:00:03.825892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.825898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.825904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.825910 | orchestrator | skipping: [testbed-node-3] 2026-03-01 
01:00:03.825916 | orchestrator | 2026-03-01 01:00:03.825922 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-01 01:00:03.825928 | orchestrator | Sunday 01 March 2026 00:52:45 +0000 (0:00:00.702) 0:03:20.020 ********** 2026-03-01 01:00:03.825934 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.825941 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.825947 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.825952 | orchestrator | 2026-03-01 01:00:03.825958 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-01 01:00:03.825964 | orchestrator | Sunday 01 March 2026 00:52:45 +0000 (0:00:00.578) 0:03:20.599 ********** 2026-03-01 01:00:03.825970 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.825976 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.825983 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.825989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.825996 | orchestrator | 2026-03-01 01:00:03.826002 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-01 01:00:03.826009 | orchestrator | Sunday 01 March 2026 00:52:46 +0000 (0:00:00.792) 0:03:21.391 ********** 2026-03-01 01:00:03.826044 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.826048 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.826052 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.826056 | orchestrator | 2026-03-01 01:00:03.826060 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-01 01:00:03.826063 | orchestrator | Sunday 01 March 2026 00:52:47 +0000 (0:00:00.541) 0:03:21.933 ********** 2026-03-01 01:00:03.826067 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.826071 | 
orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.826075 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.826079 | orchestrator | 2026-03-01 01:00:03.826082 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-01 01:00:03.826086 | orchestrator | Sunday 01 March 2026 00:52:48 +0000 (0:00:01.148) 0:03:23.082 ********** 2026-03-01 01:00:03.826090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.826094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.826098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.826101 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.826105 | orchestrator | 2026-03-01 01:00:03.826109 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-01 01:00:03.826113 | orchestrator | Sunday 01 March 2026 00:52:48 +0000 (0:00:00.564) 0:03:23.646 ********** 2026-03-01 01:00:03.826117 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.826120 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.826124 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.826128 | orchestrator | 2026-03-01 01:00:03.826132 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-01 01:00:03.826139 | orchestrator | Sunday 01 March 2026 00:52:49 +0000 (0:00:00.318) 0:03:23.964 ********** 2026-03-01 01:00:03.826143 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.826147 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.826154 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.826160 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826167 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826201 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826210 | orchestrator | 
2026-03-01 01:00:03.826217 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-01 01:00:03.826223 | orchestrator | Sunday 01 March 2026 00:52:50 +0000 (0:00:00.801) 0:03:24.765 ********** 2026-03-01 01:00:03.826229 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.826235 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.826242 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.826249 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.826253 | orchestrator | 2026-03-01 01:00:03.826257 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-01 01:00:03.826260 | orchestrator | Sunday 01 March 2026 00:52:50 +0000 (0:00:00.835) 0:03:25.601 ********** 2026-03-01 01:00:03.826264 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826269 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826275 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826281 | orchestrator | 2026-03-01 01:00:03.826287 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-01 01:00:03.826293 | orchestrator | Sunday 01 March 2026 00:52:51 +0000 (0:00:00.593) 0:03:26.194 ********** 2026-03-01 01:00:03.826299 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.826306 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.826312 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.826319 | orchestrator | 2026-03-01 01:00:03.826325 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-01 01:00:03.826331 | orchestrator | Sunday 01 March 2026 00:52:52 +0000 (0:00:01.234) 0:03:27.429 ********** 2026-03-01 01:00:03.826335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-01 01:00:03.826338 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-01 01:00:03.826342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-01 01:00:03.826346 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826350 | orchestrator | 2026-03-01 01:00:03.826353 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-01 01:00:03.826357 | orchestrator | Sunday 01 March 2026 00:52:53 +0000 (0:00:00.576) 0:03:28.006 ********** 2026-03-01 01:00:03.826361 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826365 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826368 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826372 | orchestrator | 2026-03-01 01:00:03.826379 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-01 01:00:03.826383 | orchestrator | 2026-03-01 01:00:03.826387 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-01 01:00:03.826391 | orchestrator | Sunday 01 March 2026 00:52:53 +0000 (0:00:00.695) 0:03:28.701 ********** 2026-03-01 01:00:03.826395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.826399 | orchestrator | 2026-03-01 01:00:03.826403 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-01 01:00:03.826407 | orchestrator | Sunday 01 March 2026 00:52:54 +0000 (0:00:00.504) 0:03:29.205 ********** 2026-03-01 01:00:03.826411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.826415 | orchestrator | 2026-03-01 01:00:03.826425 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-01 01:00:03.826429 | 
orchestrator | Sunday 01 March 2026 00:52:54 +0000 (0:00:00.456) 0:03:29.662 ********** 2026-03-01 01:00:03.826433 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826437 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826440 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826444 | orchestrator | 2026-03-01 01:00:03.826448 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-01 01:00:03.826454 | orchestrator | Sunday 01 March 2026 00:52:55 +0000 (0:00:00.957) 0:03:30.619 ********** 2026-03-01 01:00:03.826463 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826471 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826478 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826483 | orchestrator | 2026-03-01 01:00:03.826489 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-01 01:00:03.826495 | orchestrator | Sunday 01 March 2026 00:52:56 +0000 (0:00:00.296) 0:03:30.915 ********** 2026-03-01 01:00:03.826501 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826506 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826512 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826518 | orchestrator | 2026-03-01 01:00:03.826524 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-01 01:00:03.826530 | orchestrator | Sunday 01 March 2026 00:52:56 +0000 (0:00:00.294) 0:03:31.210 ********** 2026-03-01 01:00:03.826536 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826543 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826549 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826555 | orchestrator | 2026-03-01 01:00:03.826561 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-01 01:00:03.826567 | orchestrator | Sunday 
01 March 2026 00:52:56 +0000 (0:00:00.286) 0:03:31.496 ********** 2026-03-01 01:00:03.826573 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826579 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826585 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826591 | orchestrator | 2026-03-01 01:00:03.826598 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-01 01:00:03.826605 | orchestrator | Sunday 01 March 2026 00:52:57 +0000 (0:00:01.046) 0:03:32.543 ********** 2026-03-01 01:00:03.826611 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826618 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826624 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826630 | orchestrator | 2026-03-01 01:00:03.826637 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-01 01:00:03.826642 | orchestrator | Sunday 01 March 2026 00:52:58 +0000 (0:00:00.306) 0:03:32.849 ********** 2026-03-01 01:00:03.826668 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826673 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826676 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826680 | orchestrator | 2026-03-01 01:00:03.826684 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-01 01:00:03.826688 | orchestrator | Sunday 01 March 2026 00:52:58 +0000 (0:00:00.289) 0:03:33.139 ********** 2026-03-01 01:00:03.826692 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826695 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826699 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826703 | orchestrator | 2026-03-01 01:00:03.826707 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-01 01:00:03.826711 | orchestrator | Sunday 01 March 2026 00:52:59 +0000 
(0:00:00.819) 0:03:33.958 ********** 2026-03-01 01:00:03.826715 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826718 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826722 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826726 | orchestrator | 2026-03-01 01:00:03.826730 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-01 01:00:03.826738 | orchestrator | Sunday 01 March 2026 00:53:00 +0000 (0:00:00.959) 0:03:34.918 ********** 2026-03-01 01:00:03.826742 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826745 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826749 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826753 | orchestrator | 2026-03-01 01:00:03.826757 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-01 01:00:03.826760 | orchestrator | Sunday 01 March 2026 00:53:00 +0000 (0:00:00.315) 0:03:35.234 ********** 2026-03-01 01:00:03.826767 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.826773 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.826780 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.826787 | orchestrator | 2026-03-01 01:00:03.826794 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-01 01:00:03.826813 | orchestrator | Sunday 01 March 2026 00:53:00 +0000 (0:00:00.314) 0:03:35.548 ********** 2026-03-01 01:00:03.826819 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826825 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826831 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826837 | orchestrator | 2026-03-01 01:00:03.826843 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-01 01:00:03.826850 | orchestrator | Sunday 01 March 2026 00:53:01 +0000 (0:00:00.284) 0:03:35.832 ********** 
2026-03-01 01:00:03.826856 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826863 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826873 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826879 | orchestrator | 2026-03-01 01:00:03.826885 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-01 01:00:03.826891 | orchestrator | Sunday 01 March 2026 00:53:01 +0000 (0:00:00.427) 0:03:36.260 ********** 2026-03-01 01:00:03.826895 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826902 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826908 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826914 | orchestrator | 2026-03-01 01:00:03.826921 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-01 01:00:03.826927 | orchestrator | Sunday 01 March 2026 00:53:01 +0000 (0:00:00.307) 0:03:36.567 ********** 2026-03-01 01:00:03.826934 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826941 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826947 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826953 | orchestrator | 2026-03-01 01:00:03.826960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-01 01:00:03.826966 | orchestrator | Sunday 01 March 2026 00:53:02 +0000 (0:00:00.278) 0:03:36.846 ********** 2026-03-01 01:00:03.826973 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.826979 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:03.826985 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:03.826991 | orchestrator | 2026-03-01 01:00:03.826997 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-01 01:00:03.827003 | orchestrator | Sunday 01 March 2026 00:53:02 +0000 (0:00:00.314) 0:03:37.161 ********** 
2026-03-01 01:00:03.827009 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827015 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827022 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827028 | orchestrator | 2026-03-01 01:00:03.827037 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-01 01:00:03.827045 | orchestrator | Sunday 01 March 2026 00:53:02 +0000 (0:00:00.282) 0:03:37.443 ********** 2026-03-01 01:00:03.827051 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827056 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827062 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827067 | orchestrator | 2026-03-01 01:00:03.827074 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-01 01:00:03.827080 | orchestrator | Sunday 01 March 2026 00:53:03 +0000 (0:00:00.463) 0:03:37.907 ********** 2026-03-01 01:00:03.827092 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827099 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827105 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827111 | orchestrator | 2026-03-01 01:00:03.827117 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-01 01:00:03.827124 | orchestrator | Sunday 01 March 2026 00:53:03 +0000 (0:00:00.491) 0:03:38.399 ********** 2026-03-01 01:00:03.827130 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827136 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827142 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827149 | orchestrator | 2026-03-01 01:00:03.827155 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-01 01:00:03.827161 | orchestrator | Sunday 01 March 2026 00:53:03 +0000 (0:00:00.313) 0:03:38.712 ********** 2026-03-01 01:00:03.827168 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.827173 | orchestrator | 2026-03-01 01:00:03.827177 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-01 01:00:03.827181 | orchestrator | Sunday 01 March 2026 00:53:04 +0000 (0:00:00.732) 0:03:39.445 ********** 2026-03-01 01:00:03.827185 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.827188 | orchestrator | 2026-03-01 01:00:03.827213 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-01 01:00:03.827218 | orchestrator | Sunday 01 March 2026 00:53:04 +0000 (0:00:00.151) 0:03:39.597 ********** 2026-03-01 01:00:03.827221 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-01 01:00:03.827225 | orchestrator | 2026-03-01 01:00:03.827229 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-01 01:00:03.827233 | orchestrator | Sunday 01 March 2026 00:53:05 +0000 (0:00:01.091) 0:03:40.689 ********** 2026-03-01 01:00:03.827237 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827241 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827244 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827248 | orchestrator | 2026-03-01 01:00:03.827252 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-01 01:00:03.827256 | orchestrator | Sunday 01 March 2026 00:53:06 +0000 (0:00:00.290) 0:03:40.980 ********** 2026-03-01 01:00:03.827260 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827264 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827267 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827273 | orchestrator | 2026-03-01 01:00:03.827280 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-01 01:00:03.827287 | orchestrator 
| Sunday 01 March 2026 00:53:06 +0000 (0:00:00.487) 0:03:41.468 ********** 2026-03-01 01:00:03.827294 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.827301 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.827308 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.827315 | orchestrator | 2026-03-01 01:00:03.827322 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-01 01:00:03.827329 | orchestrator | Sunday 01 March 2026 00:53:07 +0000 (0:00:01.129) 0:03:42.597 ********** 2026-03-01 01:00:03.827336 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.827344 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.827350 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.827357 | orchestrator | 2026-03-01 01:00:03.827363 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-01 01:00:03.827370 | orchestrator | Sunday 01 March 2026 00:53:08 +0000 (0:00:00.689) 0:03:43.287 ********** 2026-03-01 01:00:03.827377 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.827384 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.827391 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.827398 | orchestrator | 2026-03-01 01:00:03.827405 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-01 01:00:03.827412 | orchestrator | Sunday 01 March 2026 00:53:09 +0000 (0:00:00.749) 0:03:44.036 ********** 2026-03-01 01:00:03.827427 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.827435 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.827441 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.827448 | orchestrator | 2026-03-01 01:00:03.827455 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-01 01:00:03.827461 | orchestrator | Sunday 01 March 2026 
00:53:09 +0000 (0:00:00.628) 0:03:44.665 **********
2026-03-01 01:00:03.827466 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.827473 | orchestrator |
2026-03-01 01:00:03.827479 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-01 01:00:03.827486 | orchestrator | Sunday 01 March 2026 00:53:11 +0000 (0:00:01.514) 0:03:46.180 **********
2026-03-01 01:00:03.827492 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.827499 | orchestrator |
2026-03-01 01:00:03.827505 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-01 01:00:03.827511 | orchestrator | Sunday 01 March 2026 00:53:12 +0000 (0:00:00.685) 0:03:46.865 **********
2026-03-01 01:00:03.827518 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.827524 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:00:03.827530 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:00:03.827537 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:00:03.827543 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:00:03.827549 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-01 01:00:03.827555 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:00:03.827561 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-01 01:00:03.827567 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-01 01:00:03.827573 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-01 01:00:03.827579 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:00:03.827587 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-01 01:00:03.827595 | orchestrator |
2026-03-01 01:00:03.827603 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-01 01:00:03.827612 | orchestrator | Sunday 01 March 2026 00:53:15 +0000 (0:00:02.895) 0:03:49.761 **********
2026-03-01 01:00:03.827621 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.827629 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.827638 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.827647 | orchestrator |
2026-03-01 01:00:03.827654 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-01 01:00:03.827661 | orchestrator | Sunday 01 March 2026 00:53:16 +0000 (0:00:01.538) 0:03:51.299 **********
2026-03-01 01:00:03.827666 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.827673 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.827679 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.827686 | orchestrator |
2026-03-01 01:00:03.827692 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-01 01:00:03.827698 | orchestrator | Sunday 01 March 2026 00:53:16 +0000 (0:00:00.306) 0:03:51.606 **********
2026-03-01 01:00:03.827705 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.827711 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.827717 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.827723 | orchestrator |
2026-03-01 01:00:03.827729 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-01 01:00:03.827736 | orchestrator | Sunday 01 March 2026 00:53:17 +0000 (0:00:00.489) 0:03:52.096 **********
2026-03-01 01:00:03.827742 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.827776 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.827783 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.827790 | orchestrator |
2026-03-01 01:00:03.827796 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-01 01:00:03.827841 | orchestrator | Sunday 01 March 2026 00:53:19 +0000 (0:00:01.654) 0:03:53.750 **********
2026-03-01 01:00:03.827848 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.827854 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.827860 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.827867 | orchestrator |
2026-03-01 01:00:03.827873 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-01 01:00:03.827879 | orchestrator | Sunday 01 March 2026 00:53:20 +0000 (0:00:01.467) 0:03:55.218 **********
2026-03-01 01:00:03.827886 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.827892 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.827898 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.827904 | orchestrator |
2026-03-01 01:00:03.827911 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-01 01:00:03.827917 | orchestrator | Sunday 01 March 2026 00:53:20 +0000 (0:00:00.336) 0:03:55.554 **********
2026-03-01 01:00:03.827923 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.827929 | orchestrator |
2026-03-01 01:00:03.827936 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-01 01:00:03.827942 | orchestrator | Sunday 01 March 2026 00:53:21 +0000 (0:00:00.637) 0:03:56.192 **********
2026-03-01 01:00:03.827948 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.827955 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.827961 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.827968 | orchestrator |
2026-03-01 01:00:03.827972 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-01 01:00:03.827976 | orchestrator | Sunday 01 March 2026 00:53:21 +0000 (0:00:00.285) 0:03:56.477 **********
2026-03-01 01:00:03.827979 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.827983 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.827987 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.827990 | orchestrator |
2026-03-01 01:00:03.827994 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-01 01:00:03.827998 | orchestrator | Sunday 01 March 2026 00:53:22 +0000 (0:00:00.282) 0:03:56.760 **********
2026-03-01 01:00:03.828005 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.828010 | orchestrator |
2026-03-01 01:00:03.828014 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-01 01:00:03.828017 | orchestrator | Sunday 01 March 2026 00:53:22 +0000 (0:00:00.585) 0:03:57.345 **********
2026-03-01 01:00:03.828021 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.828025 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.828028 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.828032 | orchestrator |
2026-03-01 01:00:03.828036 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-01 01:00:03.828040 | orchestrator | Sunday 01 March 2026 00:53:24 +0000 (0:00:01.577) 0:03:58.923 **********
2026-03-01 01:00:03.828043 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.828047 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.828051 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.828055 | orchestrator |
2026-03-01 01:00:03.828058 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-01 01:00:03.828062 | orchestrator | Sunday 01 March 2026 00:53:25 +0000 (0:00:01.386) 0:04:00.310 **********
2026-03-01 01:00:03.828066 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.828070 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.828073 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.828077 | orchestrator |
2026-03-01 01:00:03.828081 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-01 01:00:03.828084 | orchestrator | Sunday 01 March 2026 00:53:27 +0000 (0:00:01.945) 0:04:02.256 **********
2026-03-01 01:00:03.828091 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.828095 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.828099 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.828102 | orchestrator |
2026-03-01 01:00:03.828108 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-01 01:00:03.828114 | orchestrator | Sunday 01 March 2026 00:53:29 +0000 (0:00:02.290) 0:04:04.546 **********
2026-03-01 01:00:03.828123 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.828130 | orchestrator |
2026-03-01 01:00:03.828136 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-01 01:00:03.828142 | orchestrator | Sunday 01 March 2026 00:53:30 +0000 (0:00:00.630) 0:04:05.176 **********
2026-03-01 01:00:03.828147 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-01 01:00:03.828153 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828159 | orchestrator |
2026-03-01 01:00:03.828165 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-01 01:00:03.828171 | orchestrator | Sunday 01 March 2026 00:53:52 +0000 (0:00:22.199) 0:04:27.376 **********
2026-03-01 01:00:03.828177 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828182 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828188 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828193 | orchestrator |
2026-03-01 01:00:03.828200 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-01 01:00:03.828206 | orchestrator | Sunday 01 March 2026 00:54:02 +0000 (0:00:09.781) 0:04:37.158 **********
2026-03-01 01:00:03.828212 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828218 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828224 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828230 | orchestrator |
2026-03-01 01:00:03.828237 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-01 01:00:03.828267 | orchestrator | Sunday 01 March 2026 00:54:02 +0000 (0:00:00.525) 0:04:37.683 **********
2026-03-01 01:00:03.828276 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-01 01:00:03.828281 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-01 01:00:03.828286 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-01 01:00:03.828291 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-01 01:00:03.828298 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-01 01:00:03.828307 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e1eddf486fecca85028254a12671c4597fd27d04'}])
2026-03-01 01:00:03.828312 | orchestrator |
2026-03-01 01:00:03.828316 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-01 01:00:03.828322 | orchestrator | Sunday 01 March 2026 00:54:17 +0000 (0:00:14.881) 0:04:52.565 **********
2026-03-01 01:00:03.828328 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828334 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828339 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828345 | orchestrator |
2026-03-01 01:00:03.828352 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-01 01:00:03.828359 | orchestrator | Sunday 01 March 2026 00:54:18 +0000 (0:00:00.362) 0:04:52.927 **********
2026-03-01 01:00:03.828363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.828367 | orchestrator |
2026-03-01 01:00:03.828371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-01 01:00:03.828374 | orchestrator | Sunday 01 March 2026 00:54:19 +0000 (0:00:00.819) 0:04:53.747 **********
2026-03-01 01:00:03.828378 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828382 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828386 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828389 | orchestrator |
2026-03-01 01:00:03.828393 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-01 01:00:03.828397 | orchestrator | Sunday 01 March 2026 00:54:19 +0000 (0:00:00.346) 0:04:54.093 **********
2026-03-01 01:00:03.828401 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828404 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828408 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828412 | orchestrator |
2026-03-01 01:00:03.828416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-01 01:00:03.828420 | orchestrator | Sunday 01 March 2026 00:54:19 +0000 (0:00:00.344) 0:04:54.438 **********
2026-03-01 01:00:03.828423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-01 01:00:03.828427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-01 01:00:03.828431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-01 01:00:03.828435 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828439 | orchestrator |
2026-03-01 01:00:03.828442 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-01 01:00:03.828446 | orchestrator | Sunday 01 March 2026 00:54:20 +0000 (0:00:01.179) 0:04:55.618 **********
2026-03-01 01:00:03.828450 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828454 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828471 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828476 | orchestrator |
2026-03-01 01:00:03.828482 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-01 01:00:03.828489 | orchestrator |
2026-03-01 01:00:03.828499 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-01 01:00:03.828505 | orchestrator | Sunday 01 March 2026 00:54:21 +0000 (0:00:00.591) 0:04:56.210 **********
2026-03-01 01:00:03.828511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.828518 | orchestrator |
2026-03-01 01:00:03.828524 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-01 01:00:03.828529 | orchestrator | Sunday 01 March 2026 00:54:22 +0000 (0:00:00.555) 0:04:56.765 **********
2026-03-01 01:00:03.828539 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.828546 | orchestrator |
2026-03-01 01:00:03.828552 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-01 01:00:03.828558 | orchestrator | Sunday 01 March 2026 00:54:22 +0000 (0:00:00.807) 0:04:57.573 **********
2026-03-01 01:00:03.828563 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828569 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828574 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828580 | orchestrator |
2026-03-01 01:00:03.828585 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-01 01:00:03.828591 | orchestrator | Sunday 01 March 2026 00:54:23 +0000 (0:00:00.891) 0:04:58.464 **********
2026-03-01 01:00:03.828596 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828602 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828608 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828613 | orchestrator |
2026-03-01 01:00:03.828620 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-01 01:00:03.828626 | orchestrator | Sunday 01 March 2026 00:54:24 +0000 (0:00:00.341) 0:04:58.806 **********
2026-03-01 01:00:03.828631 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828638 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828644 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828650 | orchestrator |
2026-03-01 01:00:03.828656 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-01 01:00:03.828666 | orchestrator | Sunday 01 March 2026 00:54:24 +0000 (0:00:00.553) 0:04:59.359 **********
2026-03-01 01:00:03.828673 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828679 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828685 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828691 | orchestrator |
2026-03-01 01:00:03.828697 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-01 01:00:03.828704 | orchestrator | Sunday 01 March 2026 00:54:24 +0000 (0:00:00.315) 0:04:59.675 **********
2026-03-01 01:00:03.828710 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828716 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828723 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828729 | orchestrator |
2026-03-01 01:00:03.828735 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-01 01:00:03.828742 | orchestrator | Sunday 01 March 2026 00:54:25 +0000 (0:00:00.721) 0:05:00.396 **********
2026-03-01 01:00:03.828747 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828753 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828758 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828764 | orchestrator |
2026-03-01 01:00:03.828770 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-01 01:00:03.828776 | orchestrator | Sunday 01 March 2026 00:54:25 +0000 (0:00:00.319) 0:05:00.716 **********
2026-03-01 01:00:03.828782 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828788 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828794 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828812 | orchestrator |
2026-03-01 01:00:03.828818 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-01 01:00:03.828824 | orchestrator | Sunday 01 March 2026 00:54:26 +0000 (0:00:00.539) 0:05:01.255 **********
2026-03-01 01:00:03.828830 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828836 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828842 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828848 | orchestrator |
2026-03-01 01:00:03.828854 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-01 01:00:03.828861 | orchestrator | Sunday 01 March 2026 00:54:27 +0000 (0:00:00.784) 0:05:02.039 **********
2026-03-01 01:00:03.828867 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828879 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828883 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828887 | orchestrator |
2026-03-01 01:00:03.828890 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-01 01:00:03.828894 | orchestrator | Sunday 01 March 2026 00:54:28 +0000 (0:00:00.802) 0:05:02.842 **********
2026-03-01 01:00:03.828898 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828902 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828905 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828909 | orchestrator |
2026-03-01 01:00:03.828913 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-01 01:00:03.828917 | orchestrator | Sunday 01 March 2026 00:54:28 +0000 (0:00:00.284) 0:05:03.127 **********
2026-03-01 01:00:03.828921 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.828924 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.828928 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.828932 | orchestrator |
2026-03-01 01:00:03.828936 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-01 01:00:03.828939 | orchestrator | Sunday 01 March 2026 00:54:28 +0000 (0:00:00.582) 0:05:03.709 **********
2026-03-01 01:00:03.828943 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828947 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.828950 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.828954 | orchestrator |
2026-03-01 01:00:03.828958 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-01 01:00:03.828982 | orchestrator | Sunday 01 March 2026 00:54:29 +0000 (0:00:00.327) 0:05:04.037 **********
2026-03-01 01:00:03.828988 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.828994 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829001 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829008 | orchestrator |
2026-03-01 01:00:03.829014 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-01 01:00:03.829021 | orchestrator | Sunday 01 March 2026 00:54:29 +0000 (0:00:00.285) 0:05:04.322 **********
2026-03-01 01:00:03.829029 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829039 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829045 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829052 | orchestrator |
2026-03-01 01:00:03.829059 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-01 01:00:03.829066 | orchestrator | Sunday 01 March 2026 00:54:29 +0000 (0:00:00.266) 0:05:04.588 **********
2026-03-01 01:00:03.829073 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829080 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829087 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829093 | orchestrator |
2026-03-01 01:00:03.829097 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-01 01:00:03.829101 | orchestrator | Sunday 01 March 2026 00:54:30 +0000 (0:00:00.273) 0:05:04.862 **********
2026-03-01 01:00:03.829104 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829108 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829112 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829115 | orchestrator |
2026-03-01 01:00:03.829119 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-01 01:00:03.829123 | orchestrator | Sunday 01 March 2026 00:54:30 +0000 (0:00:00.466) 0:05:05.328 **********
2026-03-01 01:00:03.829130 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.829136 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.829142 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.829149 | orchestrator |
2026-03-01 01:00:03.829155 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-01 01:00:03.829162 | orchestrator | Sunday 01 March 2026 00:54:30 +0000 (0:00:00.289) 0:05:05.618 **********
2026-03-01 01:00:03.829168 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.829175 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.829180 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.829190 | orchestrator |
2026-03-01 01:00:03.829193 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-01 01:00:03.829197 | orchestrator | Sunday 01 March 2026 00:54:31 +0000 (0:00:00.284) 0:05:05.903 **********
2026-03-01 01:00:03.829201 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.829212 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.829219 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.829225 | orchestrator |
2026-03-01 01:00:03.829231 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-01 01:00:03.829238 | orchestrator | Sunday 01 March 2026 00:54:31 +0000 (0:00:00.626) 0:05:06.530 **********
2026-03-01 01:00:03.829244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-01 01:00:03.829251 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-01 01:00:03.829258 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-01 01:00:03.829264 | orchestrator |
2026-03-01 01:00:03.829270 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-01 01:00:03.829276 | orchestrator | Sunday 01 March 2026 00:54:32 +0000 (0:00:00.546) 0:05:07.076 **********
2026-03-01 01:00:03.829284 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.829290 | orchestrator |
2026-03-01 01:00:03.829297 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-01 01:00:03.829303 | orchestrator | Sunday 01 March 2026 00:54:32 +0000 (0:00:00.462) 0:05:07.539 **********
2026-03-01 01:00:03.829310 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.829316 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.829322 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.829330 | orchestrator |
2026-03-01 01:00:03.829336 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-01 01:00:03.829342 | orchestrator | Sunday 01 March 2026 00:54:33 +0000 (0:00:00.703) 0:05:08.242 **********
2026-03-01 01:00:03.829349 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829355 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829361 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829368 | orchestrator |
2026-03-01 01:00:03.829374 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-01 01:00:03.829381 | orchestrator | Sunday 01 March 2026 00:54:33 +0000 (0:00:00.430) 0:05:08.672 **********
2026-03-01 01:00:03.829387 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829394 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829400 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829407 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-01 01:00:03.829413 | orchestrator |
2026-03-01 01:00:03.829418 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-01 01:00:03.829424 | orchestrator | Sunday 01 March 2026 00:54:43 +0000 (0:00:09.158) 0:05:17.831 **********
2026-03-01 01:00:03.829430 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.829435 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.829441 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.829447 | orchestrator |
2026-03-01 01:00:03.829453 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-01 01:00:03.829459 | orchestrator | Sunday 01 March 2026 00:54:43 +0000 (0:00:00.366) 0:05:18.197 **********
2026-03-01 01:00:03.829465 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829471 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-01 01:00:03.829480 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-01 01:00:03.829489 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829497 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:00:03.829531 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:00:03.829543 | orchestrator |
2026-03-01 01:00:03.829550 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-01 01:00:03.829556 | orchestrator | Sunday 01 March 2026 00:54:45 +0000 (0:00:02.063) 0:05:20.261 **********
2026-03-01 01:00:03.829562 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829568 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-01 01:00:03.829575 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-01 01:00:03.829581 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:00:03.829589 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-01 01:00:03.829596 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-01 01:00:03.829601 | orchestrator |
2026-03-01 01:00:03.829608 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-01 01:00:03.829615 | orchestrator | Sunday 01 March 2026 00:54:46 +0000 (0:00:01.280) 0:05:21.541 **********
2026-03-01 01:00:03.829624 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.829633 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.829642 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.829651 | orchestrator |
2026-03-01 01:00:03.829657 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-01 01:00:03.829664 | orchestrator | Sunday 01 March 2026 00:54:47 +0000 (0:00:01.079) 0:05:22.621 **********
2026-03-01 01:00:03.829669 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829676 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829683 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829688 | orchestrator |
2026-03-01 01:00:03.829695 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-01 01:00:03.829701 | orchestrator | Sunday 01 March 2026 00:54:48 +0000 (0:00:00.320) 0:05:22.941 **********
2026-03-01 01:00:03.829707 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829714 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829720 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829726 | orchestrator |
2026-03-01 01:00:03.829731 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-01 01:00:03.829737 | orchestrator | Sunday 01 March 2026 00:54:48 +0000 (0:00:00.307) 0:05:23.248 **********
2026-03-01 01:00:03.829744 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.829750 | orchestrator |
2026-03-01 01:00:03.829761 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-01 01:00:03.829767 | orchestrator | Sunday 01 March 2026 00:54:49 +0000 (0:00:00.757) 0:05:24.006 **********
2026-03-01 01:00:03.829774 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829780 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829786 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829793 | orchestrator |
2026-03-01 01:00:03.829829 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-01 01:00:03.829836 | orchestrator | Sunday 01 March 2026 00:54:49 +0000 (0:00:00.358) 0:05:24.365 **********
2026-03-01 01:00:03.829843 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.829848 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.829854 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.829860 | orchestrator |
2026-03-01 01:00:03.829866 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-01 01:00:03.829872 | orchestrator | Sunday 01 March 2026 00:54:49 +0000 (0:00:00.338) 0:05:24.703 **********
2026-03-01 01:00:03.829878 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.829885 | orchestrator |
2026-03-01 01:00:03.829890 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-01 01:00:03.829896 | orchestrator | Sunday 01 March 2026 00:54:50 +0000 (0:00:00.788) 0:05:25.492 **********
2026-03-01 01:00:03.829908 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.829914 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.829921 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.829927 | orchestrator |
2026-03-01 01:00:03.829933 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-01 01:00:03.829940 | orchestrator | Sunday 01 March 2026 00:54:52 +0000 (0:00:01.279) 0:05:26.772 **********
2026-03-01 01:00:03.829946 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.829952 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.829958 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.829965 | orchestrator |
2026-03-01 01:00:03.829971 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-01 01:00:03.829977 | orchestrator | Sunday 01 March 2026 00:54:53 +0000 (0:00:01.363) 0:05:28.135 **********
2026-03-01 01:00:03.829984 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.829990 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.829997 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.830003 | orchestrator |
2026-03-01 01:00:03.830009 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-01 01:00:03.830047 | orchestrator | Sunday 01 March 2026 00:54:55 +0000 (0:00:02.034) 0:05:30.169 **********
2026-03-01 01:00:03.830054 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.830060 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.830066 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.830073 | orchestrator |
2026-03-01 01:00:03.830080 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-01 01:00:03.830086 | orchestrator | Sunday 01 March 2026 00:54:57 +0000 (0:00:02.373) 0:05:32.543 **********
2026-03-01 01:00:03.830093 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.830099 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.830105 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-01 01:00:03.830112 | orchestrator |
2026-03-01 01:00:03.830118 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-01 01:00:03.830124 | orchestrator | Sunday 01 March 2026 00:54:58 +0000 (0:00:00.444) 0:05:32.988 **********
2026-03-01 01:00:03.830159 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-01 01:00:03.830164 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-01 01:00:03.830168 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-01 01:00:03.830172 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-01 01:00:03.830176 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.830180 | orchestrator |
2026-03-01 01:00:03.830184 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-01 01:00:03.830187 | orchestrator | Sunday 01 March 2026 00:55:22 +0000 (0:00:24.341) 0:05:57.330 **********
2026-03-01 01:00:03.830191 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.830195 | orchestrator |
2026-03-01 01:00:03.830199 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-01 01:00:03.830203 | orchestrator | Sunday 01 March 2026 00:55:24 +0000 (0:00:01.428) 0:05:58.758 **********
2026-03-01 01:00:03.830207 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.830211 | orchestrator |
2026-03-01 01:00:03.830214 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-01 01:00:03.830218 | orchestrator | Sunday 01 March 2026 00:55:24 +0000 (0:00:00.356) 0:05:59.115 **********
2026-03-01 01:00:03.830222 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.830226 | orchestrator |
2026-03-01 01:00:03.830229 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-01 01:00:03.830233 | orchestrator | Sunday 01 March 2026 00:55:24 +0000 (0:00:00.128) 0:05:59.243 **********
2026-03-01 01:00:03.830244 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-01 01:00:03.830250 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-01 01:00:03.830257 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-01 01:00:03.830263 | orchestrator |
2026-03-01 01:00:03.830270 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-01 01:00:03.830276 | orchestrator | Sunday 01 March 2026 00:55:31 +0000 (0:00:06.903) 0:06:06.146 **********
2026-03-01 01:00:03.830286 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-01 01:00:03.830292 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-01 01:00:03.830299 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-01 01:00:03.830306 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-01 01:00:03.830313 | orchestrator |
2026-03-01 01:00:03.830319 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-01 01:00:03.830326 | orchestrator | Sunday 01 March 2026 00:55:36 +0000 (0:00:04.966) 0:06:11.113 **********
2026-03-01 01:00:03.830332 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.830338 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.830345 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.830351 | orchestrator | 2026-03-01 01:00:03.830357 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-01 01:00:03.830364 | orchestrator | Sunday 01 March 2026 00:55:37 +0000 (0:00:00.709) 0:06:11.823 ********** 2026-03-01 01:00:03.830370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:03.830376 | orchestrator | 2026-03-01 01:00:03.830383 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-01 01:00:03.830389 | orchestrator | Sunday 01 March 2026 00:55:37 +0000 (0:00:00.743) 0:06:12.566 ********** 2026-03-01 01:00:03.830395 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.830401 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.830408 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.830414 | orchestrator | 2026-03-01 01:00:03.830421 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-01 01:00:03.830427 | orchestrator | Sunday 01 March 2026 00:55:38 +0000 (0:00:00.319) 0:06:12.886 ********** 2026-03-01 01:00:03.830434 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:03.830440 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:03.830446 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:03.830452 | orchestrator | 2026-03-01 01:00:03.830458 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-01 01:00:03.830464 | orchestrator | Sunday 01 March 2026 00:55:39 +0000 (0:00:01.217) 0:06:14.104 ********** 2026-03-01 01:00:03.830470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-01 01:00:03.830476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-01 01:00:03.830481 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-01 01:00:03.830488 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:03.830493 | orchestrator | 2026-03-01 01:00:03.830500 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-01 01:00:03.830506 | orchestrator | Sunday 01 March 2026 00:55:39 +0000 (0:00:00.586) 0:06:14.691 ********** 2026-03-01 01:00:03.830513 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:03.830519 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:03.830525 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:03.830532 | orchestrator | 2026-03-01 01:00:03.830538 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-01 01:00:03.830544 | orchestrator | 2026-03-01 01:00:03.830550 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-01 01:00:03.830562 | orchestrator | Sunday 01 March 2026 00:55:40 +0000 (0:00:00.688) 0:06:15.379 ********** 2026-03-01 01:00:03.830569 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.830575 | orchestrator | 2026-03-01 01:00:03.830607 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-01 01:00:03.830616 | orchestrator | Sunday 01 March 2026 00:55:41 +0000 (0:00:00.470) 0:06:15.850 ********** 2026-03-01 01:00:03.830622 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.830628 | orchestrator | 2026-03-01 01:00:03.830635 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-01 01:00:03.830641 | orchestrator | Sunday 01 March 2026 00:55:41 +0000 (0:00:00.588) 0:06:16.438 ********** 2026-03-01 
01:00:03.830647 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.830654 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.830661 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.830667 | orchestrator | 2026-03-01 01:00:03.830673 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-01 01:00:03.830679 | orchestrator | Sunday 01 March 2026 00:55:41 +0000 (0:00:00.268) 0:06:16.707 ********** 2026-03-01 01:00:03.830686 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.830692 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.830698 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.830705 | orchestrator | 2026-03-01 01:00:03.830711 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-01 01:00:03.830718 | orchestrator | Sunday 01 March 2026 00:55:42 +0000 (0:00:00.676) 0:06:17.383 ********** 2026-03-01 01:00:03.830724 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.830730 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.830737 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.830743 | orchestrator | 2026-03-01 01:00:03.830750 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-01 01:00:03.830756 | orchestrator | Sunday 01 March 2026 00:55:43 +0000 (0:00:00.724) 0:06:18.108 ********** 2026-03-01 01:00:03.830762 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.830768 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.830775 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.830781 | orchestrator | 2026-03-01 01:00:03.830788 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-01 01:00:03.830794 | orchestrator | Sunday 01 March 2026 00:55:44 +0000 (0:00:01.058) 0:06:19.167 ********** 2026-03-01 01:00:03.830813 | orchestrator | skipping: 
[testbed-node-3] 2026-03-01 01:00:03.830820 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.830826 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.830832 | orchestrator | 2026-03-01 01:00:03.830849 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-01 01:00:03.830855 | orchestrator | Sunday 01 March 2026 00:55:44 +0000 (0:00:00.295) 0:06:19.462 ********** 2026-03-01 01:00:03.830862 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.830868 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.830874 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.830879 | orchestrator | 2026-03-01 01:00:03.830882 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-01 01:00:03.830886 | orchestrator | Sunday 01 March 2026 00:55:44 +0000 (0:00:00.247) 0:06:19.710 ********** 2026-03-01 01:00:03.830890 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.830894 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.830897 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.830901 | orchestrator | 2026-03-01 01:00:03.830905 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-01 01:00:03.830908 | orchestrator | Sunday 01 March 2026 00:55:45 +0000 (0:00:00.271) 0:06:19.981 ********** 2026-03-01 01:00:03.830912 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.830920 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.830924 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.830928 | orchestrator | 2026-03-01 01:00:03.830932 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-01 01:00:03.830935 | orchestrator | Sunday 01 March 2026 00:55:46 +0000 (0:00:00.848) 0:06:20.830 ********** 2026-03-01 01:00:03.830939 | orchestrator | ok: [testbed-node-3] 2026-03-01 
01:00:03.830943 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.830946 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.830950 | orchestrator | 2026-03-01 01:00:03.830954 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-01 01:00:03.830958 | orchestrator | Sunday 01 March 2026 00:55:46 +0000 (0:00:00.685) 0:06:21.515 ********** 2026-03-01 01:00:03.830963 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.830969 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.830975 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.830980 | orchestrator | 2026-03-01 01:00:03.830986 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-01 01:00:03.830992 | orchestrator | Sunday 01 March 2026 00:55:47 +0000 (0:00:00.271) 0:06:21.786 ********** 2026-03-01 01:00:03.830998 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831003 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831009 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831014 | orchestrator | 2026-03-01 01:00:03.831020 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-01 01:00:03.831026 | orchestrator | Sunday 01 March 2026 00:55:47 +0000 (0:00:00.307) 0:06:22.093 ********** 2026-03-01 01:00:03.831033 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831040 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831045 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831048 | orchestrator | 2026-03-01 01:00:03.831052 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-01 01:00:03.831057 | orchestrator | Sunday 01 March 2026 00:55:47 +0000 (0:00:00.450) 0:06:22.544 ********** 2026-03-01 01:00:03.831061 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831065 | orchestrator | ok: 
[testbed-node-4] 2026-03-01 01:00:03.831068 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831072 | orchestrator | 2026-03-01 01:00:03.831076 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-01 01:00:03.831080 | orchestrator | Sunday 01 March 2026 00:55:48 +0000 (0:00:00.315) 0:06:22.860 ********** 2026-03-01 01:00:03.831083 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831087 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831091 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831094 | orchestrator | 2026-03-01 01:00:03.831098 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-01 01:00:03.831107 | orchestrator | Sunday 01 March 2026 00:55:48 +0000 (0:00:00.278) 0:06:23.138 ********** 2026-03-01 01:00:03.831111 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831115 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831118 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831122 | orchestrator | 2026-03-01 01:00:03.831126 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-01 01:00:03.831130 | orchestrator | Sunday 01 March 2026 00:55:48 +0000 (0:00:00.260) 0:06:23.399 ********** 2026-03-01 01:00:03.831133 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831137 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831142 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831148 | orchestrator | 2026-03-01 01:00:03.831156 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-01 01:00:03.831165 | orchestrator | Sunday 01 March 2026 00:55:49 +0000 (0:00:00.409) 0:06:23.809 ********** 2026-03-01 01:00:03.831171 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831177 | orchestrator | skipping: [testbed-node-4] 2026-03-01 
01:00:03.831183 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831194 | orchestrator | 2026-03-01 01:00:03.831200 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-01 01:00:03.831205 | orchestrator | Sunday 01 March 2026 00:55:49 +0000 (0:00:00.269) 0:06:24.079 ********** 2026-03-01 01:00:03.831210 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831216 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831222 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831228 | orchestrator | 2026-03-01 01:00:03.831234 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-01 01:00:03.831240 | orchestrator | Sunday 01 March 2026 00:55:49 +0000 (0:00:00.285) 0:06:24.365 ********** 2026-03-01 01:00:03.831247 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831253 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831260 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831266 | orchestrator | 2026-03-01 01:00:03.831272 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-01 01:00:03.831278 | orchestrator | Sunday 01 March 2026 00:55:50 +0000 (0:00:00.586) 0:06:24.951 ********** 2026-03-01 01:00:03.831285 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831291 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831297 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831304 | orchestrator | 2026-03-01 01:00:03.831308 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-01 01:00:03.831315 | orchestrator | Sunday 01 March 2026 00:55:50 +0000 (0:00:00.270) 0:06:25.221 ********** 2026-03-01 01:00:03.831319 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:00:03.831323 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:00:03.831327 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:00:03.831331 | orchestrator | 2026-03-01 01:00:03.831334 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-01 01:00:03.831338 | orchestrator | Sunday 01 March 2026 00:55:50 +0000 (0:00:00.468) 0:06:25.690 ********** 2026-03-01 01:00:03.831342 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.831346 | orchestrator | 2026-03-01 01:00:03.831349 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-01 01:00:03.831354 | orchestrator | Sunday 01 March 2026 00:55:51 +0000 (0:00:00.450) 0:06:26.140 ********** 2026-03-01 01:00:03.831360 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831366 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831370 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831374 | orchestrator | 2026-03-01 01:00:03.831378 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-01 01:00:03.831381 | orchestrator | Sunday 01 March 2026 00:55:51 +0000 (0:00:00.413) 0:06:26.553 ********** 2026-03-01 01:00:03.831385 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831389 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831393 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831396 | orchestrator | 2026-03-01 01:00:03.831400 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-01 01:00:03.831404 | orchestrator | Sunday 01 March 2026 00:55:52 +0000 (0:00:00.265) 0:06:26.819 ********** 2026-03-01 01:00:03.831408 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831411 | 
orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831415 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831419 | orchestrator | 2026-03-01 01:00:03.831423 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-01 01:00:03.831426 | orchestrator | Sunday 01 March 2026 00:55:52 +0000 (0:00:00.607) 0:06:27.426 ********** 2026-03-01 01:00:03.831430 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831434 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831437 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831444 | orchestrator | 2026-03-01 01:00:03.831448 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-01 01:00:03.831452 | orchestrator | Sunday 01 March 2026 00:55:52 +0000 (0:00:00.264) 0:06:27.691 ********** 2026-03-01 01:00:03.831456 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-01 01:00:03.831460 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-01 01:00:03.831463 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-01 01:00:03.831467 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-01 01:00:03.831471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-01 01:00:03.831475 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-01 01:00:03.831484 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-01 01:00:03.831488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-01 01:00:03.831492 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-01 01:00:03.831496 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-01 01:00:03.831500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-01 01:00:03.831503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-01 01:00:03.831507 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-01 01:00:03.831511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-01 01:00:03.831515 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-01 01:00:03.831518 | orchestrator | 2026-03-01 01:00:03.831522 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-01 01:00:03.831526 | orchestrator | Sunday 01 March 2026 00:55:56 +0000 (0:00:03.297) 0:06:30.988 ********** 2026-03-01 01:00:03.831530 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831533 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831537 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831541 | orchestrator | 2026-03-01 01:00:03.831545 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-01 01:00:03.831548 | orchestrator | Sunday 01 March 2026 00:55:56 +0000 (0:00:00.267) 0:06:31.256 ********** 2026-03-01 01:00:03.831552 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.831556 | orchestrator | 2026-03-01 01:00:03.831560 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-01 01:00:03.831563 | orchestrator | Sunday 01 March 2026 00:55:56 +0000 (0:00:00.443) 
0:06:31.699 ********** 2026-03-01 01:00:03.831567 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-01 01:00:03.831571 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-01 01:00:03.831577 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-01 01:00:03.831580 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-01 01:00:03.831584 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-01 01:00:03.831588 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-01 01:00:03.831592 | orchestrator | 2026-03-01 01:00:03.831596 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-01 01:00:03.831599 | orchestrator | Sunday 01 March 2026 00:55:58 +0000 (0:00:01.120) 0:06:32.819 ********** 2026-03-01 01:00:03.831603 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.831617 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.831621 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.831625 | orchestrator | 2026-03-01 01:00:03.831629 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-01 01:00:03.831633 | orchestrator | Sunday 01 March 2026 00:56:00 +0000 (0:00:02.020) 0:06:34.840 ********** 2026-03-01 01:00:03.831637 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:00:03.831640 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.831644 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.831648 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:00:03.831652 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-01 01:00:03.831656 | orchestrator | changed: [testbed-node-4] 2026-03-01 
01:00:03.831659 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:00:03.831663 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-01 01:00:03.831667 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.831671 | orchestrator | 2026-03-01 01:00:03.831675 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-01 01:00:03.831678 | orchestrator | Sunday 01 March 2026 00:56:01 +0000 (0:00:01.112) 0:06:35.952 ********** 2026-03-01 01:00:03.831682 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:00:03.831686 | orchestrator | 2026-03-01 01:00:03.831690 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-01 01:00:03.831693 | orchestrator | Sunday 01 March 2026 00:56:03 +0000 (0:00:02.322) 0:06:38.275 ********** 2026-03-01 01:00:03.831697 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.831701 | orchestrator | 2026-03-01 01:00:03.831705 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-01 01:00:03.831709 | orchestrator | Sunday 01 March 2026 00:56:04 +0000 (0:00:00.718) 0:06:38.993 ********** 2026-03-01 01:00:03.831713 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-024d169c-08bb-513a-b447-fe5a7c318e63', 'data_vg': 'ceph-024d169c-08bb-513a-b447-fe5a7c318e63'}) 2026-03-01 01:00:03.831717 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d', 'data_vg': 'ceph-14f5527d-3d57-5d3d-81f7-fd6f0358fc1d'}) 2026-03-01 01:00:03.831721 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-31f22992-0e1a-5ef5-a8b3-14a12910c272', 'data_vg': 'ceph-31f22992-0e1a-5ef5-a8b3-14a12910c272'}) 2026-03-01 01:00:03.831727 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-b33a93dc-e50a-56e8-9161-d310a7d41007', 'data_vg': 'ceph-b33a93dc-e50a-56e8-9161-d310a7d41007'}) 2026-03-01 01:00:03.831731 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3', 'data_vg': 'ceph-71bbeaa0-80e8-52b0-b7ca-02965d05b7d3'}) 2026-03-01 01:00:03.831735 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d1a7437a-a9c6-5afd-b028-da6f65a62b89', 'data_vg': 'ceph-d1a7437a-a9c6-5afd-b028-da6f65a62b89'}) 2026-03-01 01:00:03.831739 | orchestrator | 2026-03-01 01:00:03.831743 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-01 01:00:03.831747 | orchestrator | Sunday 01 March 2026 00:56:46 +0000 (0:00:42.014) 0:07:21.008 ********** 2026-03-01 01:00:03.831750 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.831754 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.831758 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.831762 | orchestrator | 2026-03-01 01:00:03.831766 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-01 01:00:03.831770 | orchestrator | Sunday 01 March 2026 00:56:46 +0000 (0:00:00.341) 0:07:21.350 ********** 2026-03-01 01:00:03.831773 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.831780 | orchestrator | 2026-03-01 01:00:03.831784 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-01 01:00:03.831788 | orchestrator | Sunday 01 March 2026 00:56:47 +0000 (0:00:00.985) 0:07:22.335 ********** 2026-03-01 01:00:03.831791 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.831795 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.831811 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.831818 | orchestrator | 2026-03-01 
2026-03-01 01:00:03.831824 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-01 01:00:03.831830 | orchestrator | Sunday 01 March 2026 00:56:48 +0000 (0:00:00.713) 0:07:23.048 **********
2026-03-01 01:00:03.831836 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.831842 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.831848 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.831854 | orchestrator |
2026-03-01 01:00:03.831860 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-01 01:00:03.831867 | orchestrator | Sunday 01 March 2026 00:56:50 +0000 (0:00:02.371) 0:07:25.420 **********
2026-03-01 01:00:03.831879 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:00:03.831886 | orchestrator |
2026-03-01 01:00:03.831890 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-01 01:00:03.831894 | orchestrator | Sunday 01 March 2026 00:56:51 +0000 (0:00:00.766) 0:07:26.186 **********
2026-03-01 01:00:03.831898 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.831902 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.831906 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.831909 | orchestrator |
2026-03-01 01:00:03.831913 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-01 01:00:03.831917 | orchestrator | Sunday 01 March 2026 00:56:52 +0000 (0:00:01.144) 0:07:27.330 **********
2026-03-01 01:00:03.831921 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.831925 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.831928 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.831932 | orchestrator |
2026-03-01 01:00:03.831936 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-01 01:00:03.831940 | orchestrator | Sunday 01 March 2026 00:56:53 +0000 (0:00:01.110) 0:07:28.441 **********
2026-03-01 01:00:03.831943 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.831947 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.831951 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.831954 | orchestrator |
2026-03-01 01:00:03.831958 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-01 01:00:03.831962 | orchestrator | Sunday 01 March 2026 00:56:55 +0000 (0:00:01.867) 0:07:30.308 **********
2026-03-01 01:00:03.831966 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.831969 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.831973 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.831977 | orchestrator |
2026-03-01 01:00:03.831981 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-01 01:00:03.831984 | orchestrator | Sunday 01 March 2026 00:56:56 +0000 (0:00:00.576) 0:07:30.885 **********
2026-03-01 01:00:03.831988 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.831992 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.831996 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.831999 | orchestrator |
2026-03-01 01:00:03.832003 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-01 01:00:03.832007 | orchestrator | Sunday 01 March 2026 00:56:56 +0000 (0:00:00.309) 0:07:31.194 **********
2026-03-01 01:00:03.832011 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-01 01:00:03.832014 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-01 01:00:03.832018 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-01 01:00:03.832022 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-03-01 01:00:03.832029 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-01 01:00:03.832032 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-01 01:00:03.832036 | orchestrator |
2026-03-01 01:00:03.832040 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-01 01:00:03.832044 | orchestrator | Sunday 01 March 2026 00:56:57 +0000 (0:00:01.121) 0:07:32.316 **********
2026-03-01 01:00:03.832047 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-01 01:00:03.832051 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-01 01:00:03.832055 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-01 01:00:03.832059 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-01 01:00:03.832062 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-01 01:00:03.832066 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-01 01:00:03.832070 | orchestrator |
2026-03-01 01:00:03.832076 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-01 01:00:03.832080 | orchestrator | Sunday 01 March 2026 00:56:59 +0000 (0:00:02.379) 0:07:34.696 **********
2026-03-01 01:00:03.832084 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-01 01:00:03.832088 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-01 01:00:03.832091 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-01 01:00:03.832095 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-01 01:00:03.832099 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-01 01:00:03.832102 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-01 01:00:03.832106 | orchestrator |
2026-03-01 01:00:03.832110 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-01 01:00:03.832114 | orchestrator | Sunday 01 March 2026 00:57:04 +0000 (0:00:04.040) 0:07:38.737 **********
2026-03-01 01:00:03.832117 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832121 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832125 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.832129 | orchestrator |
2026-03-01 01:00:03.832132 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-01 01:00:03.832136 | orchestrator | Sunday 01 March 2026 00:57:06 +0000 (0:00:02.123) 0:07:40.860 **********
2026-03-01 01:00:03.832140 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832144 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832148 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-01 01:00:03.832151 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.832155 | orchestrator |
2026-03-01 01:00:03.832159 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-01 01:00:03.832163 | orchestrator | Sunday 01 March 2026 00:57:18 +0000 (0:00:12.096) 0:07:52.957 **********
2026-03-01 01:00:03.832167 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832170 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832174 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832178 | orchestrator |
2026-03-01 01:00:03.832181 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-01 01:00:03.832185 | orchestrator | Sunday 01 March 2026 00:57:19 +0000 (0:00:00.927) 0:07:53.884 **********
2026-03-01 01:00:03.832189 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832193 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832198 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832202 | orchestrator |
2026-03-01 01:00:03.832206 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-01 01:00:03.832209 | orchestrator | Sunday 01 March 2026 00:57:19 +0000 (0:00:00.293) 0:07:54.178 **********
2026-03-01 01:00:03.832213 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:00:03.832217 | orchestrator |
2026-03-01 01:00:03.832221 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-01 01:00:03.832227 | orchestrator | Sunday 01 March 2026 00:57:19 +0000 (0:00:00.449) 0:07:54.627 **********
2026-03-01 01:00:03.832231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:00:03.832234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 01:00:03.832238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 01:00:03.832242 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832246 | orchestrator |
2026-03-01 01:00:03.832250 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-01 01:00:03.832253 | orchestrator | Sunday 01 March 2026 00:57:20 +0000 (0:00:00.962) 0:07:55.589 **********
2026-03-01 01:00:03.832257 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832261 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832264 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832268 | orchestrator |
2026-03-01 01:00:03.832272 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-01 01:00:03.832276 | orchestrator | Sunday 01 March 2026 00:57:21 +0000 (0:00:00.229) 0:07:55.931 **********
2026-03-01 01:00:03.832279 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832283 | orchestrator |
2026-03-01 01:00:03.832287 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-01 01:00:03.832291 | orchestrator | Sunday 01 March 2026 00:57:21 +0000 (0:00:00.229) 0:07:56.161 **********
2026-03-01 01:00:03.832294 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832298 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832302 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832305 | orchestrator |
2026-03-01 01:00:03.832309 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-01 01:00:03.832313 | orchestrator | Sunday 01 March 2026 00:57:21 +0000 (0:00:00.307) 0:07:56.469 **********
2026-03-01 01:00:03.832317 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832320 | orchestrator |
2026-03-01 01:00:03.832324 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-01 01:00:03.832328 | orchestrator | Sunday 01 March 2026 00:57:21 +0000 (0:00:00.219) 0:07:56.688 **********
2026-03-01 01:00:03.832332 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832335 | orchestrator |
2026-03-01 01:00:03.832339 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-01 01:00:03.832343 | orchestrator | Sunday 01 March 2026 00:57:22 +0000 (0:00:00.217) 0:07:56.906 **********
2026-03-01 01:00:03.832347 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832350 | orchestrator |
2026-03-01 01:00:03.832354 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-01 01:00:03.832358 | orchestrator | Sunday 01 March 2026 00:57:22 +0000 (0:00:00.128) 0:07:57.034 **********
2026-03-01 01:00:03.832362 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832365 | orchestrator |
2026-03-01 01:00:03.832369 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-01 01:00:03.832373 | orchestrator | Sunday 01 March 2026 00:57:22 +0000 (0:00:00.205) 0:07:57.240 **********
2026-03-01 01:00:03.832379 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832383 | orchestrator |
2026-03-01 01:00:03.832386 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-01 01:00:03.832390 | orchestrator | Sunday 01 March 2026 00:57:23 +0000 (0:00:00.764) 0:07:58.005 **********
2026-03-01 01:00:03.832394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 01:00:03.832398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:00:03.832401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 01:00:03.832405 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832409 | orchestrator |
2026-03-01 01:00:03.832412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-01 01:00:03.832416 | orchestrator | Sunday 01 March 2026 00:57:23 +0000 (0:00:00.392) 0:07:58.398 **********
2026-03-01 01:00:03.832422 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832426 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832430 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832433 | orchestrator |
2026-03-01 01:00:03.832437 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-01 01:00:03.832441 | orchestrator | Sunday 01 March 2026 00:57:23 +0000 (0:00:00.318) 0:07:58.717 **********
2026-03-01 01:00:03.832445 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832448 | orchestrator |
2026-03-01 01:00:03.832452 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-01 01:00:03.832456 | orchestrator | Sunday 01 March 2026 00:57:24 +0000 (0:00:00.301) 0:07:59.018 **********
2026-03-01 01:00:03.832460 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832463 | orchestrator |
2026-03-01 01:00:03.832467 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-01 01:00:03.832471 | orchestrator |
2026-03-01 01:00:03.832475 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-01 01:00:03.832478 | orchestrator | Sunday 01 March 2026 00:57:25 +0000 (0:00:00.883) 0:07:59.902 **********
2026-03-01 01:00:03.832482 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.832487 | orchestrator |
2026-03-01 01:00:03.832491 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-01 01:00:03.832497 | orchestrator | Sunday 01 March 2026 00:57:26 +0000 (0:00:01.220) 0:08:01.123 **********
2026-03-01 01:00:03.832501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.832505 | orchestrator |
2026-03-01 01:00:03.832508 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-01 01:00:03.832512 | orchestrator | Sunday 01 March 2026 00:57:27 +0000 (0:00:01.012) 0:08:02.136 **********
2026-03-01 01:00:03.832516 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832519 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832523 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832527 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.832531 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.832534 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.832538 | orchestrator |
2026-03-01 01:00:03.832542 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-01 01:00:03.832546 | orchestrator | Sunday 01 March 2026 00:57:28 +0000 (0:00:01.191) 0:08:03.327 **********
2026-03-01 01:00:03.832550 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832553 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832557 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832561 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832565 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832568 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832572 | orchestrator |
2026-03-01 01:00:03.832576 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-01 01:00:03.832580 | orchestrator | Sunday 01 March 2026 00:57:29 +0000 (0:00:00.676) 0:08:04.004 **********
2026-03-01 01:00:03.832583 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832587 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832591 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832595 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832598 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832602 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832606 | orchestrator |
2026-03-01 01:00:03.832610 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-01 01:00:03.832613 | orchestrator | Sunday 01 March 2026 00:57:30 +0000 (0:00:00.923) 0:08:04.927 **********
2026-03-01 01:00:03.832621 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832624 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832628 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832632 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832636 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832639 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832643 | orchestrator |
2026-03-01 01:00:03.832647 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-01 01:00:03.832651 | orchestrator | Sunday 01 March 2026 00:57:30 +0000 (0:00:00.659) 0:08:05.587 **********
2026-03-01 01:00:03.832654 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832658 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832662 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832666 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.832669 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.832673 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.832677 | orchestrator |
2026-03-01 01:00:03.832680 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-01 01:00:03.832684 | orchestrator | Sunday 01 March 2026 00:57:32 +0000 (0:00:01.158) 0:08:06.746 **********
2026-03-01 01:00:03.832688 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832691 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832695 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832699 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832703 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832709 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832713 | orchestrator |
2026-03-01 01:00:03.832716 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-01 01:00:03.832720 | orchestrator | Sunday 01 March 2026 00:57:32 +0000 (0:00:00.595) 0:08:07.341 **********
2026-03-01 01:00:03.832724 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832727 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832731 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832735 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832739 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832742 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832746 | orchestrator |
2026-03-01 01:00:03.832750 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-01 01:00:03.832754 | orchestrator | Sunday 01 March 2026 00:57:33 +0000 (0:00:00.859) 0:08:08.200 **********
2026-03-01 01:00:03.832757 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832761 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832765 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832769 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.832773 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.832776 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.832780 | orchestrator |
2026-03-01 01:00:03.832784 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-01 01:00:03.832787 | orchestrator | Sunday 01 March 2026 00:57:34 +0000 (0:00:01.196) 0:08:09.397 **********
2026-03-01 01:00:03.832791 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832795 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832809 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832813 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.832817 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.832821 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.832824 | orchestrator |
2026-03-01 01:00:03.832828 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-01 01:00:03.832832 | orchestrator | Sunday 01 March 2026 00:57:36 +0000 (0:00:01.518) 0:08:10.915 **********
2026-03-01 01:00:03.832836 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832839 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832843 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832847 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832850 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832857 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832861 | orchestrator |
2026-03-01 01:00:03.832864 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-01 01:00:03.832870 | orchestrator | Sunday 01 March 2026 00:57:36 +0000 (0:00:00.559) 0:08:11.475 **********
2026-03-01 01:00:03.832874 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.832877 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.832881 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.832885 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.832889 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.832892 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.832896 | orchestrator |
2026-03-01 01:00:03.832900 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-01 01:00:03.832904 | orchestrator | Sunday 01 March 2026 00:57:37 +0000 (0:00:00.801) 0:08:12.277 **********
2026-03-01 01:00:03.832907 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832911 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832915 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832918 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832922 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832926 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832930 | orchestrator |
2026-03-01 01:00:03.832933 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-01 01:00:03.832937 | orchestrator | Sunday 01 March 2026 00:57:38 +0000 (0:00:00.567) 0:08:12.845 **********
2026-03-01 01:00:03.832941 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832945 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832948 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.832952 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.832956 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.832961 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.832967 | orchestrator |
2026-03-01 01:00:03.832972 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-01 01:00:03.832978 | orchestrator | Sunday 01 March 2026 00:57:38 +0000 (0:00:00.804) 0:08:13.649 **********
2026-03-01 01:00:03.832984 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.832990 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.832996 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833003 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.833009 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.833015 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.833021 | orchestrator |
2026-03-01 01:00:03.833028 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-01 01:00:03.833034 | orchestrator | Sunday 01 March 2026 00:57:39 +0000 (0:00:00.570) 0:08:14.220 **********
2026-03-01 01:00:03.833038 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833042 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833045 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833049 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.833053 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.833057 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.833060 | orchestrator |
2026-03-01 01:00:03.833064 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-01 01:00:03.833068 | orchestrator | Sunday 01 March 2026 00:57:40 +0000 (0:00:00.794) 0:08:15.015 **********
2026-03-01 01:00:03.833071 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833075 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833079 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833083 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:00:03.833087 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:00:03.833091 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:00:03.833094 | orchestrator |
2026-03-01 01:00:03.833098 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-01 01:00:03.833105 | orchestrator | Sunday 01 March 2026 00:57:40 +0000 (0:00:00.649) 0:08:15.664 **********
2026-03-01 01:00:03.833109 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833113 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833116 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833120 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833127 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.833131 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.833135 | orchestrator |
2026-03-01 01:00:03.833139 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-01 01:00:03.833142 | orchestrator | Sunday 01 March 2026 00:57:41 +0000 (0:00:00.845) 0:08:16.509 **********
2026-03-01 01:00:03.833146 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833150 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833155 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833161 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833166 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.833172 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.833178 | orchestrator |
2026-03-01 01:00:03.833184 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-01 01:00:03.833191 | orchestrator | Sunday 01 March 2026 00:57:42 +0000 (0:00:00.614) 0:08:17.124 **********
2026-03-01 01:00:03.833198 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833204 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833211 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833216 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833220 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.833224 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.833227 | orchestrator |
2026-03-01 01:00:03.833231 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-01 01:00:03.833235 | orchestrator | Sunday 01 March 2026 00:57:43 +0000 (0:00:01.279) 0:08:18.404 **********
2026-03-01 01:00:03.833239 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.833243 | orchestrator |
2026-03-01 01:00:03.833247 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-01 01:00:03.833250 | orchestrator | Sunday 01 March 2026 00:57:47 +0000 (0:00:04.108) 0:08:22.513 **********
2026-03-01 01:00:03.833254 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-01 01:00:03.833258 | orchestrator |
2026-03-01 01:00:03.833261 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-01 01:00:03.833265 | orchestrator | Sunday 01 March 2026 00:57:49 +0000 (0:00:02.063) 0:08:24.576 **********
2026-03-01 01:00:03.833269 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.833273 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.833276 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.833280 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.833284 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833288 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.833291 | orchestrator |
2026-03-01 01:00:03.833298 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-01 01:00:03.833302 | orchestrator | Sunday 01 March 2026 00:57:51 +0000 (0:00:01.846) 0:08:26.422 **********
2026-03-01 01:00:03.833305 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.833309 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.833313 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.833317 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.833320 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.833324 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.833328 | orchestrator |
2026-03-01 01:00:03.833332 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-01 01:00:03.833335 | orchestrator | Sunday 01 March 2026 00:57:52 +0000 (0:00:01.007) 0:08:27.430 **********
2026-03-01 01:00:03.833339 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.833348 | orchestrator |
2026-03-01 01:00:03.833352 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-01 01:00:03.833355 | orchestrator | Sunday 01 March 2026 00:57:53 +0000 (0:00:01.235) 0:08:28.665 **********
2026-03-01 01:00:03.833359 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.833363 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.833367 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.833370 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.833374 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.833378 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.833381 | orchestrator |
2026-03-01 01:00:03.833385 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-01 01:00:03.833389 | orchestrator | Sunday 01 March 2026 00:57:55 +0000 (0:00:01.882) 0:08:30.547 **********
2026-03-01 01:00:03.833393 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.833396 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.833400 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.833404 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.833407 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.833411 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.833415 | orchestrator |
2026-03-01 01:00:03.833419 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-01 01:00:03.833422 | orchestrator | Sunday 01 March 2026 00:57:59 +0000 (0:00:03.778) 0:08:34.326 **********
2026-03-01 01:00:03.833426 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:00:03.833430 | orchestrator |
2026-03-01 01:00:03.833434 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-01 01:00:03.833438 | orchestrator | Sunday 01 March 2026 00:58:00 +0000 (0:00:01.334) 0:08:35.661 **********
2026-03-01 01:00:03.833441 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833445 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833449 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833453 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833456 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.833460 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.833464 | orchestrator |
2026-03-01 01:00:03.833467 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-01 01:00:03.833471 | orchestrator | Sunday 01 March 2026 00:58:01 +0000 (0:00:00.834) 0:08:36.496 **********
2026-03-01 01:00:03.833475 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:00:03.833479 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:00:03.833483 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:00:03.833486 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:00:03.833493 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:00:03.833497 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:00:03.833501 | orchestrator |
2026-03-01 01:00:03.833504 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-01 01:00:03.833508 | orchestrator | Sunday 01 March 2026 00:58:04 +0000 (0:00:02.382) 0:08:38.878 **********
2026-03-01 01:00:03.833512 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833515 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833519 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833523 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:00:03.833526 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:00:03.833530 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:00:03.833534 | orchestrator |
2026-03-01 01:00:03.833538 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-01 01:00:03.833541 | orchestrator |
2026-03-01 01:00:03.833545 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-01 01:00:03.833549 | orchestrator | Sunday 01 March 2026 00:58:05 +0000 (0:00:01.087) 0:08:39.965 **********
2026-03-01 01:00:03.833553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:00:03.833559 | orchestrator |
2026-03-01 01:00:03.833563 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-01 01:00:03.833567 | orchestrator | Sunday 01 March 2026 00:58:05 +0000 (0:00:00.489) 0:08:40.454 **********
2026-03-01 01:00:03.833571 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:00:03.833574 | orchestrator |
2026-03-01 01:00:03.833578 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-01 01:00:03.833582 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.727) 0:08:41.182 **********
2026-03-01 01:00:03.833586 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833589 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833593 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833597 | orchestrator |
2026-03-01 01:00:03.833601 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-01 01:00:03.833604 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.299) 0:08:41.482 **********
2026-03-01 01:00:03.833608 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833612 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833615 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833619 | orchestrator |
2026-03-01 01:00:03.833625 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-01 01:00:03.833629 | orchestrator | Sunday 01 March 2026 00:58:07 +0000 (0:00:00.725) 0:08:42.207 **********
2026-03-01 01:00:03.833632 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833636 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833640 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833644 | orchestrator |
2026-03-01 01:00:03.833647 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-01 01:00:03.833651 | orchestrator | Sunday 01 March 2026 00:58:08 +0000 (0:00:01.045) 0:08:43.252 **********
2026-03-01 01:00:03.833655 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833659 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833662 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833666 | orchestrator |
2026-03-01 01:00:03.833670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-01 01:00:03.833673 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.723) 0:08:43.976 **********
2026-03-01 01:00:03.833677 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833681 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833685 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833688 | orchestrator |
2026-03-01 01:00:03.833692 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-01 01:00:03.833696 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.316) 0:08:44.292 **********
2026-03-01 01:00:03.833700 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833703 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833707 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833711 | orchestrator |
2026-03-01 01:00:03.833714 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-01 01:00:03.833718 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.324) 0:08:44.617 **********
2026-03-01 01:00:03.833725 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833731 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833737 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833742 | orchestrator |
2026-03-01 01:00:03.833748 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-01 01:00:03.833754 | orchestrator | Sunday 01 March 2026 00:58:10 +0000 (0:00:00.630) 0:08:45.248 **********
2026-03-01 01:00:03.833761 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833768 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833772 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833776 | orchestrator |
2026-03-01 01:00:03.833782 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-01 01:00:03.833786 | orchestrator | Sunday 01 March 2026 00:58:11 +0000 (0:00:00.698) 0:08:45.946 **********
2026-03-01 01:00:03.833790 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833793 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833824 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:00:03.833829 | orchestrator |
2026-03-01 01:00:03.833833 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-01 01:00:03.833837 | orchestrator | Sunday 01 March 2026 00:58:11 +0000 (0:00:00.717) 0:08:46.663 **********
2026-03-01 01:00:03.833841 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833846 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833852 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833858 | orchestrator |
2026-03-01 01:00:03.833865 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-01 01:00:03.833872 | orchestrator | Sunday 01 March 2026 00:58:12 +0000 (0:00:00.342) 0:08:47.006 **********
2026-03-01 01:00:03.833882 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:00:03.833889 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:00:03.833895 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:00:03.833902 | orchestrator |
2026-03-01 01:00:03.833913 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-01 01:00:03.833920 | orchestrator | Sunday 01 March 2026 00:58:12 +0000 (0:00:00.595) 0:08:47.602 **********
2026-03-01 01:00:03.833926 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:00:03.833932 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:00:03.833938 |
orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.833944 | orchestrator | 2026-03-01 01:00:03.833951 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-01 01:00:03.833958 | orchestrator | Sunday 01 March 2026 00:58:13 +0000 (0:00:00.347) 0:08:47.949 ********** 2026-03-01 01:00:03.833965 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.833972 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.833978 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.833985 | orchestrator | 2026-03-01 01:00:03.833991 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-01 01:00:03.833998 | orchestrator | Sunday 01 March 2026 00:58:13 +0000 (0:00:00.339) 0:08:48.289 ********** 2026-03-01 01:00:03.834005 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834040 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834045 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834049 | orchestrator | 2026-03-01 01:00:03.834053 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-01 01:00:03.834057 | orchestrator | Sunday 01 March 2026 00:58:13 +0000 (0:00:00.348) 0:08:48.638 ********** 2026-03-01 01:00:03.834060 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834064 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834068 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.834072 | orchestrator | 2026-03-01 01:00:03.834076 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-01 01:00:03.834079 | orchestrator | Sunday 01 March 2026 00:58:14 +0000 (0:00:00.531) 0:08:49.170 ********** 2026-03-01 01:00:03.834083 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834087 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834091 | orchestrator | skipping: [testbed-node-5] 
2026-03-01 01:00:03.834094 | orchestrator | 2026-03-01 01:00:03.834098 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-01 01:00:03.834102 | orchestrator | Sunday 01 March 2026 00:58:14 +0000 (0:00:00.253) 0:08:49.424 ********** 2026-03-01 01:00:03.834106 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834109 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834113 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.834117 | orchestrator | 2026-03-01 01:00:03.834121 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-01 01:00:03.834133 | orchestrator | Sunday 01 March 2026 00:58:14 +0000 (0:00:00.247) 0:08:49.671 ********** 2026-03-01 01:00:03.834137 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834141 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834145 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834148 | orchestrator | 2026-03-01 01:00:03.834152 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-01 01:00:03.834156 | orchestrator | Sunday 01 March 2026 00:58:15 +0000 (0:00:00.325) 0:08:49.997 ********** 2026-03-01 01:00:03.834160 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834164 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834167 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834171 | orchestrator | 2026-03-01 01:00:03.834175 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-01 01:00:03.834179 | orchestrator | Sunday 01 March 2026 00:58:15 +0000 (0:00:00.710) 0:08:50.707 ********** 2026-03-01 01:00:03.834182 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834186 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.834190 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml 
for testbed-node-3 2026-03-01 01:00:03.834194 | orchestrator | 2026-03-01 01:00:03.834198 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-01 01:00:03.834201 | orchestrator | Sunday 01 March 2026 00:58:16 +0000 (0:00:00.361) 0:08:51.069 ********** 2026-03-01 01:00:03.834205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:00:03.834209 | orchestrator | 2026-03-01 01:00:03.834213 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-01 01:00:03.834216 | orchestrator | Sunday 01 March 2026 00:58:18 +0000 (0:00:02.233) 0:08:53.302 ********** 2026-03-01 01:00:03.834221 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-01 01:00:03.834226 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834230 | orchestrator | 2026-03-01 01:00:03.834234 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-01 01:00:03.834237 | orchestrator | Sunday 01 March 2026 00:58:18 +0000 (0:00:00.251) 0:08:53.554 ********** 2026-03-01 01:00:03.834242 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-01 01:00:03.834251 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-01 01:00:03.834255 | orchestrator | 
2026-03-01 01:00:03.834259 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-01 01:00:03.834263 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:08.058) 0:09:01.613 ********** 2026-03-01 01:00:03.834266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:00:03.834270 | orchestrator | 2026-03-01 01:00:03.834277 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-01 01:00:03.834281 | orchestrator | Sunday 01 March 2026 00:58:30 +0000 (0:00:03.652) 0:09:05.266 ********** 2026-03-01 01:00:03.834285 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834289 | orchestrator | 2026-03-01 01:00:03.834292 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-01 01:00:03.834296 | orchestrator | Sunday 01 March 2026 00:58:31 +0000 (0:00:00.553) 0:09:05.820 ********** 2026-03-01 01:00:03.834300 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-01 01:00:03.834307 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-01 01:00:03.834310 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-01 01:00:03.834314 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-01 01:00:03.834318 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-01 01:00:03.834323 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-01 01:00:03.834329 | orchestrator | 2026-03-01 01:00:03.834337 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-01 01:00:03.834346 | orchestrator | Sunday 01 March 2026 00:58:32 +0000 (0:00:01.161) 
0:09:06.982 ********** 2026-03-01 01:00:03.834352 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.834358 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.834364 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.834370 | orchestrator | 2026-03-01 01:00:03.834376 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-01 01:00:03.834381 | orchestrator | Sunday 01 March 2026 00:58:34 +0000 (0:00:02.641) 0:09:09.623 ********** 2026-03-01 01:00:03.834386 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:00:03.834392 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.834399 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834405 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:00:03.834411 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-01 01:00:03.834420 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834426 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:00:03.834432 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-01 01:00:03.834439 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834444 | orchestrator | 2026-03-01 01:00:03.834448 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-01 01:00:03.834451 | orchestrator | Sunday 01 March 2026 00:58:37 +0000 (0:00:02.137) 0:09:11.760 ********** 2026-03-01 01:00:03.834455 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834459 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834462 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834466 | orchestrator | 2026-03-01 01:00:03.834470 | orchestrator | TASK [ceph-mds : Non_containerized.yml] 
**************************************** 2026-03-01 01:00:03.834474 | orchestrator | Sunday 01 March 2026 00:58:39 +0000 (0:00:02.728) 0:09:14.489 ********** 2026-03-01 01:00:03.834477 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834481 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834485 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.834489 | orchestrator | 2026-03-01 01:00:03.834492 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-01 01:00:03.834496 | orchestrator | Sunday 01 March 2026 00:58:40 +0000 (0:00:00.395) 0:09:14.884 ********** 2026-03-01 01:00:03.834500 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834503 | orchestrator | 2026-03-01 01:00:03.834507 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-01 01:00:03.834511 | orchestrator | Sunday 01 March 2026 00:58:41 +0000 (0:00:01.349) 0:09:16.233 ********** 2026-03-01 01:00:03.834515 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834518 | orchestrator | 2026-03-01 01:00:03.834522 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-01 01:00:03.834526 | orchestrator | Sunday 01 March 2026 00:58:42 +0000 (0:00:00.858) 0:09:17.092 ********** 2026-03-01 01:00:03.834530 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834536 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834546 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834552 | orchestrator | 2026-03-01 01:00:03.834558 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-01 01:00:03.834564 | orchestrator | Sunday 01 March 2026 00:58:43 +0000 
(0:00:01.196) 0:09:18.289 ********** 2026-03-01 01:00:03.834570 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834575 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834581 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834587 | orchestrator | 2026-03-01 01:00:03.834593 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-01 01:00:03.834599 | orchestrator | Sunday 01 March 2026 00:58:44 +0000 (0:00:01.253) 0:09:19.543 ********** 2026-03-01 01:00:03.834606 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834612 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834618 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834624 | orchestrator | 2026-03-01 01:00:03.834630 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-01 01:00:03.834634 | orchestrator | Sunday 01 March 2026 00:58:46 +0000 (0:00:01.614) 0:09:21.158 ********** 2026-03-01 01:00:03.834638 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834642 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834646 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834649 | orchestrator | 2026-03-01 01:00:03.834664 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-01 01:00:03.834668 | orchestrator | Sunday 01 March 2026 00:58:48 +0000 (0:00:02.006) 0:09:23.164 ********** 2026-03-01 01:00:03.834672 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834676 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834679 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834683 | orchestrator | 2026-03-01 01:00:03.834687 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-01 01:00:03.834691 | orchestrator | Sunday 01 March 2026 00:58:49 +0000 (0:00:01.320) 0:09:24.484 
********** 2026-03-01 01:00:03.834695 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834698 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834702 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834706 | orchestrator | 2026-03-01 01:00:03.834710 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-01 01:00:03.834713 | orchestrator | Sunday 01 March 2026 00:58:50 +0000 (0:00:00.673) 0:09:25.158 ********** 2026-03-01 01:00:03.834717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834721 | orchestrator | 2026-03-01 01:00:03.834725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-01 01:00:03.834729 | orchestrator | Sunday 01 March 2026 00:58:51 +0000 (0:00:00.607) 0:09:25.765 ********** 2026-03-01 01:00:03.834732 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834736 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834740 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834744 | orchestrator | 2026-03-01 01:00:03.834748 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-01 01:00:03.834751 | orchestrator | Sunday 01 March 2026 00:58:51 +0000 (0:00:00.289) 0:09:26.055 ********** 2026-03-01 01:00:03.834755 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.834759 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.834763 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.834766 | orchestrator | 2026-03-01 01:00:03.834770 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-01 01:00:03.834777 | orchestrator | Sunday 01 March 2026 00:58:52 +0000 (0:00:01.212) 0:09:27.267 ********** 2026-03-01 01:00:03.834783 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-03-01 01:00:03.834790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.834812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.834826 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834830 | orchestrator | 2026-03-01 01:00:03.834834 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-01 01:00:03.834837 | orchestrator | Sunday 01 March 2026 00:58:53 +0000 (0:00:00.766) 0:09:28.034 ********** 2026-03-01 01:00:03.834841 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834845 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834849 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834852 | orchestrator | 2026-03-01 01:00:03.834856 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-01 01:00:03.834860 | orchestrator | 2026-03-01 01:00:03.834864 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-01 01:00:03.834867 | orchestrator | Sunday 01 March 2026 00:58:53 +0000 (0:00:00.624) 0:09:28.659 ********** 2026-03-01 01:00:03.834871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834875 | orchestrator | 2026-03-01 01:00:03.834879 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-01 01:00:03.834883 | orchestrator | Sunday 01 March 2026 00:58:54 +0000 (0:00:00.442) 0:09:29.101 ********** 2026-03-01 01:00:03.834887 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.834892 | orchestrator | 2026-03-01 01:00:03.834899 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-03-01 01:00:03.834904 | orchestrator | Sunday 01 March 2026 00:58:54 +0000 (0:00:00.582) 0:09:29.684 ********** 2026-03-01 01:00:03.834910 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.834916 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.834922 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.834927 | orchestrator | 2026-03-01 01:00:03.834933 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-01 01:00:03.834939 | orchestrator | Sunday 01 March 2026 00:58:55 +0000 (0:00:00.282) 0:09:29.967 ********** 2026-03-01 01:00:03.834944 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834950 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834957 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.834963 | orchestrator | 2026-03-01 01:00:03.834970 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-01 01:00:03.834976 | orchestrator | Sunday 01 March 2026 00:58:55 +0000 (0:00:00.709) 0:09:30.676 ********** 2026-03-01 01:00:03.834983 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.834990 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.834996 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835003 | orchestrator | 2026-03-01 01:00:03.835009 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-01 01:00:03.835016 | orchestrator | Sunday 01 March 2026 00:58:56 +0000 (0:00:00.852) 0:09:31.528 ********** 2026-03-01 01:00:03.835020 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835024 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835027 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835031 | orchestrator | 2026-03-01 01:00:03.835035 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-01 
01:00:03.835039 | orchestrator | Sunday 01 March 2026 00:58:57 +0000 (0:00:00.768) 0:09:32.296 ********** 2026-03-01 01:00:03.835043 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835046 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835050 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835054 | orchestrator | 2026-03-01 01:00:03.835058 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-01 01:00:03.835067 | orchestrator | Sunday 01 March 2026 00:58:57 +0000 (0:00:00.288) 0:09:32.585 ********** 2026-03-01 01:00:03.835074 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835080 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835092 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835098 | orchestrator | 2026-03-01 01:00:03.835105 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-01 01:00:03.835111 | orchestrator | Sunday 01 March 2026 00:58:58 +0000 (0:00:00.283) 0:09:32.868 ********** 2026-03-01 01:00:03.835117 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835121 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835127 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835132 | orchestrator | 2026-03-01 01:00:03.835138 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-01 01:00:03.835144 | orchestrator | Sunday 01 March 2026 00:58:58 +0000 (0:00:00.430) 0:09:33.298 ********** 2026-03-01 01:00:03.835151 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835157 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835163 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835170 | orchestrator | 2026-03-01 01:00:03.835176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-01 01:00:03.835183 | 
orchestrator | Sunday 01 March 2026 00:58:59 +0000 (0:00:00.748) 0:09:34.047 ********** 2026-03-01 01:00:03.835189 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835196 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835200 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835204 | orchestrator | 2026-03-01 01:00:03.835207 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-01 01:00:03.835211 | orchestrator | Sunday 01 March 2026 00:59:00 +0000 (0:00:00.700) 0:09:34.747 ********** 2026-03-01 01:00:03.835215 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835219 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835223 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835226 | orchestrator | 2026-03-01 01:00:03.835230 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-01 01:00:03.835234 | orchestrator | Sunday 01 March 2026 00:59:00 +0000 (0:00:00.258) 0:09:35.006 ********** 2026-03-01 01:00:03.835238 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835241 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835245 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835249 | orchestrator | 2026-03-01 01:00:03.835253 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-01 01:00:03.835256 | orchestrator | Sunday 01 March 2026 00:59:00 +0000 (0:00:00.416) 0:09:35.422 ********** 2026-03-01 01:00:03.835260 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835264 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835268 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835272 | orchestrator | 2026-03-01 01:00:03.835275 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-01 01:00:03.835279 | orchestrator | Sunday 01 March 2026 
00:59:00 +0000 (0:00:00.291) 0:09:35.713 ********** 2026-03-01 01:00:03.835283 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835287 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835334 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835347 | orchestrator | 2026-03-01 01:00:03.835351 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-01 01:00:03.835355 | orchestrator | Sunday 01 March 2026 00:59:01 +0000 (0:00:00.304) 0:09:36.018 ********** 2026-03-01 01:00:03.835359 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835362 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835366 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835370 | orchestrator | 2026-03-01 01:00:03.835374 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-01 01:00:03.835378 | orchestrator | Sunday 01 March 2026 00:59:01 +0000 (0:00:00.286) 0:09:36.304 ********** 2026-03-01 01:00:03.835381 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835385 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835389 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835393 | orchestrator | 2026-03-01 01:00:03.835396 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-01 01:00:03.835404 | orchestrator | Sunday 01 March 2026 00:59:01 +0000 (0:00:00.260) 0:09:36.565 ********** 2026-03-01 01:00:03.835407 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835411 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835415 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835419 | orchestrator | 2026-03-01 01:00:03.835422 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-01 01:00:03.835426 | orchestrator | Sunday 01 March 2026 00:59:02 +0000 (0:00:00.439) 
0:09:37.004 ********** 2026-03-01 01:00:03.835430 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835434 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835438 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835444 | orchestrator | 2026-03-01 01:00:03.835450 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-01 01:00:03.835457 | orchestrator | Sunday 01 March 2026 00:59:02 +0000 (0:00:00.280) 0:09:37.284 ********** 2026-03-01 01:00:03.835462 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835468 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835474 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835480 | orchestrator | 2026-03-01 01:00:03.835486 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-01 01:00:03.835492 | orchestrator | Sunday 01 March 2026 00:59:02 +0000 (0:00:00.291) 0:09:37.576 ********** 2026-03-01 01:00:03.835498 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.835504 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.835509 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.835515 | orchestrator | 2026-03-01 01:00:03.835521 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-01 01:00:03.835527 | orchestrator | Sunday 01 March 2026 00:59:03 +0000 (0:00:00.633) 0:09:38.210 ********** 2026-03-01 01:00:03.835533 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.835540 | orchestrator | 2026-03-01 01:00:03.835546 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-01 01:00:03.835552 | orchestrator | Sunday 01 March 2026 00:59:03 +0000 (0:00:00.473) 0:09:38.684 ********** 2026-03-01 01:00:03.835565 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835572 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.835576 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.835581 | orchestrator | 2026-03-01 01:00:03.835588 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-01 01:00:03.835594 | orchestrator | Sunday 01 March 2026 00:59:06 +0000 (0:00:02.161) 0:09:40.846 ********** 2026-03-01 01:00:03.835600 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:00:03.835606 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-01 01:00:03.835613 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.835619 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:00:03.835626 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-01 01:00:03.835632 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.835639 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:00:03.835645 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-01 01:00:03.835652 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.835659 | orchestrator | 2026-03-01 01:00:03.835663 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-01 01:00:03.835666 | orchestrator | Sunday 01 March 2026 00:59:07 +0000 (0:00:01.530) 0:09:42.376 ********** 2026-03-01 01:00:03.835670 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835674 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.835678 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.835681 | orchestrator | 2026-03-01 01:00:03.835685 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-01 01:00:03.835694 | orchestrator | Sunday 01 March 2026 00:59:07 +0000 
(0:00:00.332) 0:09:42.709 ********** 2026-03-01 01:00:03.835698 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.835702 | orchestrator | 2026-03-01 01:00:03.835706 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-01 01:00:03.835709 | orchestrator | Sunday 01 March 2026 00:59:08 +0000 (0:00:00.528) 0:09:43.238 ********** 2026-03-01 01:00:03.835713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.835721 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.835725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.835729 | orchestrator | 2026-03-01 01:00:03.835736 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-01 01:00:03.835740 | orchestrator | Sunday 01 March 2026 00:59:09 +0000 (0:00:01.393) 0:09:44.631 ********** 2026-03-01 01:00:03.835744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835748 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-01 01:00:03.835752 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835755 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-01 01:00:03.835759 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835763 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-01 01:00:03.835767 | orchestrator | 2026-03-01 01:00:03.835770 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-01 01:00:03.835774 | orchestrator | Sunday 01 March 2026 00:59:15 +0000 (0:00:05.655) 0:09:50.287 ********** 2026-03-01 01:00:03.835778 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835782 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.835785 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835789 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.835793 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-01 01:00:03.835797 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-01 01:00:03.835814 | orchestrator | 2026-03-01 01:00:03.835820 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-01 01:00:03.835824 | orchestrator | Sunday 01 March 2026 00:59:17 +0000 (0:00:02.123) 0:09:52.410 ********** 2026-03-01 01:00:03.835827 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:00:03.835831 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.835835 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:00:03.835838 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.835842 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:00:03.835846 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.835850 | orchestrator | 2026-03-01 
01:00:03.835853 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-01 01:00:03.835857 | orchestrator | Sunday 01 March 2026 00:59:18 +0000 (0:00:01.318) 0:09:53.729 ********** 2026-03-01 01:00:03.835867 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-01 01:00:03.835871 | orchestrator | 2026-03-01 01:00:03.835875 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-01 01:00:03.835879 | orchestrator | Sunday 01 March 2026 00:59:19 +0000 (0:00:00.251) 0:09:53.980 ********** 2026-03-01 01:00:03.835883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835907 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835913 | orchestrator | 2026-03-01 01:00:03.835920 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-01 01:00:03.835926 | orchestrator | Sunday 01 March 2026 00:59:20 +0000 (0:00:01.111) 0:09:55.092 ********** 2026-03-01 01:00:03.835931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-03-01 01:00:03.835937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-01 01:00:03.835965 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.835971 | orchestrator | 2026-03-01 01:00:03.835978 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-01 01:00:03.835982 | orchestrator | Sunday 01 March 2026 00:59:20 +0000 (0:00:00.593) 0:09:55.685 ********** 2026-03-01 01:00:03.835986 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-01 01:00:03.835990 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-01 01:00:03.835994 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-01 01:00:03.835998 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-01 01:00:03.836002 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-01 01:00:03.836005 | orchestrator | 2026-03-01 01:00:03.836009 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-01 01:00:03.836013 | orchestrator | Sunday 01 March 2026 00:59:51 +0000 (0:00:31.019) 0:10:26.705 ********** 2026-03-01 01:00:03.836017 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.836021 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.836024 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.836031 | orchestrator | 2026-03-01 01:00:03.836035 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-01 01:00:03.836039 | orchestrator | Sunday 01 March 2026 00:59:52 +0000 (0:00:00.257) 0:10:26.962 ********** 2026-03-01 01:00:03.836043 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.836046 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.836050 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.836054 | orchestrator | 2026-03-01 01:00:03.836058 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-01 01:00:03.836061 | orchestrator | Sunday 01 March 2026 00:59:52 +0000 (0:00:00.268) 0:10:27.231 ********** 2026-03-01 01:00:03.836065 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.836069 | orchestrator | 2026-03-01 01:00:03.836073 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-01 01:00:03.836077 | orchestrator | Sunday 01 March 2026 00:59:53 +0000 (0:00:00.630) 0:10:27.861 ********** 2026-03-01 01:00:03.836080 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.836084 | orchestrator | 
2026-03-01 01:00:03.836088 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-01 01:00:03.836092 | orchestrator | Sunday 01 March 2026 00:59:53 +0000 (0:00:00.463) 0:10:28.325 ********** 2026-03-01 01:00:03.836099 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.836103 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.836106 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.836110 | orchestrator | 2026-03-01 01:00:03.836114 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-01 01:00:03.836118 | orchestrator | Sunday 01 March 2026 00:59:54 +0000 (0:00:01.227) 0:10:29.552 ********** 2026-03-01 01:00:03.836121 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.836125 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.836129 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.836133 | orchestrator | 2026-03-01 01:00:03.836136 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-01 01:00:03.836140 | orchestrator | Sunday 01 March 2026 00:59:56 +0000 (0:00:01.550) 0:10:31.103 ********** 2026-03-01 01:00:03.836144 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:00:03.836148 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:00:03.836151 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:00:03.836155 | orchestrator | 2026-03-01 01:00:03.836159 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-01 01:00:03.836163 | orchestrator | Sunday 01 March 2026 00:59:58 +0000 (0:00:01.900) 0:10:33.003 ********** 2026-03-01 01:00:03.836166 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.836170 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.836174 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-01 01:00:03.836178 | orchestrator | 2026-03-01 01:00:03.836181 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-01 01:00:03.836185 | orchestrator | Sunday 01 March 2026 01:00:00 +0000 (0:00:02.392) 0:10:35.395 ********** 2026-03-01 01:00:03.836189 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.836193 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.836196 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.836200 | orchestrator | 2026-03-01 01:00:03.836204 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-01 01:00:03.836211 | orchestrator | Sunday 01 March 2026 01:00:00 +0000 (0:00:00.301) 0:10:35.697 ********** 2026-03-01 01:00:03.836220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:00:03.836231 | orchestrator | 2026-03-01 01:00:03.836238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-01 01:00:03.836245 | orchestrator | Sunday 01 March 2026 01:00:01 +0000 (0:00:00.494) 0:10:36.191 ********** 2026-03-01 01:00:03.836252 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.836258 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.836265 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.836270 | orchestrator | 2026-03-01 01:00:03.836277 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-01 01:00:03.836282 | orchestrator | Sunday 01 March 2026 01:00:01 +0000 (0:00:00.444) 0:10:36.636 ********** 2026-03-01 01:00:03.836286 | orchestrator 
| skipping: [testbed-node-3] 2026-03-01 01:00:03.836289 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:00:03.836293 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:00:03.836297 | orchestrator | 2026-03-01 01:00:03.836301 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-01 01:00:03.836304 | orchestrator | Sunday 01 March 2026 01:00:02 +0000 (0:00:00.286) 0:10:36.922 ********** 2026-03-01 01:00:03.836308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-01 01:00:03.836312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-01 01:00:03.836316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-01 01:00:03.836319 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:00:03.836323 | orchestrator | 2026-03-01 01:00:03.836327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-01 01:00:03.836331 | orchestrator | Sunday 01 March 2026 01:00:02 +0000 (0:00:00.560) 0:10:37.483 ********** 2026-03-01 01:00:03.836334 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:00:03.836338 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:00:03.836342 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:00:03.836346 | orchestrator | 2026-03-01 01:00:03.836349 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:00:03.836353 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-01 01:00:03.836358 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-01 01:00:03.836361 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-01 01:00:03.836365 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-03-01 01:00:03.836369 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-01 01:00:03.836373 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-01 01:00:03.836376 | orchestrator | 2026-03-01 01:00:03.836380 | orchestrator | 2026-03-01 01:00:03.836384 | orchestrator | 2026-03-01 01:00:03.836390 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:00:03.836394 | orchestrator | Sunday 01 March 2026 01:00:02 +0000 (0:00:00.238) 0:10:37.722 ********** 2026-03-01 01:00:03.836398 | orchestrator | =============================================================================== 2026-03-01 01:00:03.836402 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 46.18s 2026-03-01 01:00:03.836405 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.01s 2026-03-01 01:00:03.836409 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.02s 2026-03-01 01:00:03.836413 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.34s 2026-03-01 01:00:03.836419 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.20s 2026-03-01 01:00:03.836423 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.88s 2026-03-01 01:00:03.836427 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.10s 2026-03-01 01:00:03.836430 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.78s 2026-03-01 01:00:03.836434 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.16s 2026-03-01 01:00:03.836438 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.06s 2026-03-01 01:00:03.836442 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.98s 2026-03-01 01:00:03.836445 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.90s 2026-03-01 01:00:03.836449 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.66s 2026-03-01 01:00:03.836453 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.97s 2026-03-01 01:00:03.836456 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.11s 2026-03-01 01:00:03.836460 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.04s 2026-03-01 01:00:03.836464 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.78s 2026-03-01 01:00:03.836467 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.65s 2026-03-01 01:00:03.836473 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.30s 2026-03-01 01:00:03.836477 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.08s 2026-03-01 01:00:03.836480 | orchestrator | 2026-03-01 01:00:03 | INFO  | Wait 1 second(s) until the next check 
2026-03-01 01:00:06.861115 | orchestrator | 2026-03-01 01:00:06 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:06.862754 | orchestrator | 2026-03-01 01:00:06 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:06.865504 | orchestrator | 2026-03-01 01:00:06 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:06.865537 | orchestrator | 2026-03-01 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:09.903496 | orchestrator | 2026-03-01 01:00:09 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:09.904326 | orchestrator | 2026-03-01 01:00:09 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:09.905100 | orchestrator | 2026-03-01 01:00:09 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:09.905405 | orchestrator | 2026-03-01 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:12.949095 | orchestrator | 2026-03-01 01:00:12 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:12.950357 | orchestrator | 2026-03-01 01:00:12 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:12.951586 | orchestrator | 2026-03-01 01:00:12 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:12.951615 | orchestrator | 2026-03-01 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:15.988186 | orchestrator | 2026-03-01 01:00:15 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:15.989942 | orchestrator | 2026-03-01 01:00:15 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:15.991914 | orchestrator | 2026-03-01 01:00:15 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:15.992052 | 
orchestrator | 2026-03-01 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:19.034269 | orchestrator | 2026-03-01 01:00:19 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:19.034483 | orchestrator | 2026-03-01 01:00:19 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:19.036263 | orchestrator | 2026-03-01 01:00:19 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:19.036321 | orchestrator | 2026-03-01 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:22.067175 | orchestrator | 2026-03-01 01:00:22 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:22.068720 | orchestrator | 2026-03-01 01:00:22 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:22.071631 | orchestrator | 2026-03-01 01:00:22 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:22.071818 | orchestrator | 2026-03-01 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:25.107850 | orchestrator | 2026-03-01 01:00:25 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:25.108284 | orchestrator | 2026-03-01 01:00:25 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:25.110207 | orchestrator | 2026-03-01 01:00:25 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:25.110254 | orchestrator | 2026-03-01 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:28.154991 | orchestrator | 2026-03-01 01:00:28 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:28.156989 | orchestrator | 2026-03-01 01:00:28 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:28.157941 | orchestrator | 2026-03-01 01:00:28 | INFO  | Task 
77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:28.158003 | orchestrator | 2026-03-01 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:31.211035 | orchestrator | 2026-03-01 01:00:31 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:31.213729 | orchestrator | 2026-03-01 01:00:31 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:31.217481 | orchestrator | 2026-03-01 01:00:31 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:31.217564 | orchestrator | 2026-03-01 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:34.259464 | orchestrator | 2026-03-01 01:00:34 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:34.261225 | orchestrator | 2026-03-01 01:00:34 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:34.264539 | orchestrator | 2026-03-01 01:00:34 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:34.264888 | orchestrator | 2026-03-01 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:37.302059 | orchestrator | 2026-03-01 01:00:37 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:37.305278 | orchestrator | 2026-03-01 01:00:37 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:37.307645 | orchestrator | 2026-03-01 01:00:37 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:37.307706 | orchestrator | 2026-03-01 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:40.352640 | orchestrator | 2026-03-01 01:00:40 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:40.354147 | orchestrator | 2026-03-01 01:00:40 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state 
STARTED 2026-03-01 01:00:40.354922 | orchestrator | 2026-03-01 01:00:40 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:40.355306 | orchestrator | 2026-03-01 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:43.407064 | orchestrator | 2026-03-01 01:00:43 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:43.408580 | orchestrator | 2026-03-01 01:00:43 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:43.410103 | orchestrator | 2026-03-01 01:00:43 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:43.410149 | orchestrator | 2026-03-01 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:46.445015 | orchestrator | 2026-03-01 01:00:46 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:46.446352 | orchestrator | 2026-03-01 01:00:46 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:46.448180 | orchestrator | 2026-03-01 01:00:46 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:46.448223 | orchestrator | 2026-03-01 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:49.489605 | orchestrator | 2026-03-01 01:00:49 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:49.491875 | orchestrator | 2026-03-01 01:00:49 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:49.494965 | orchestrator | 2026-03-01 01:00:49 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state STARTED 2026-03-01 01:00:49.495269 | orchestrator | 2026-03-01 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:52.532535 | orchestrator | 2026-03-01 01:00:52 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:52.534424 | orchestrator | 
2026-03-01 01:00:52 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:52.536700 | orchestrator | 2026-03-01 01:00:52 | INFO  | Task 77e7c2da-ffe5-4a55-a7c4-9f4bc974d2a5 is in state SUCCESS 2026-03-01 01:00:52.538861 | orchestrator | 2026-03-01 01:00:52.538898 | orchestrator | 2026-03-01 01:00:52.538907 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:00:52.538914 | orchestrator | 2026-03-01 01:00:52.538921 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:00:52.538928 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-01 01:00:52.538934 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:52.538942 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:52.539018 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:52.539026 | orchestrator | 2026-03-01 01:00:52.539030 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:00:52.539034 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.285) 0:00:00.545 ********** 2026-03-01 01:00:52.539038 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-01 01:00:52.539042 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-01 01:00:52.539046 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-01 01:00:52.539050 | orchestrator | 2026-03-01 01:00:52.539054 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-01 01:00:52.539063 | orchestrator | 2026-03-01 01:00:52.539091 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-01 01:00:52.539096 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.439) 0:00:00.984 ********** 2026-03-01 01:00:52.539100 | orchestrator 
| included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:52.539104 | orchestrator | 2026-03-01 01:00:52.539107 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-01 01:00:52.539111 | orchestrator | Sunday 01 March 2026 00:58:07 +0000 (0:00:00.511) 0:00:01.496 ********** 2026-03-01 01:00:52.539115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 01:00:52.539119 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 01:00:52.539123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-01 01:00:52.539126 | orchestrator | 2026-03-01 01:00:52.539130 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-01 01:00:52.539134 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:01.639) 0:00:03.136 ********** 2026-03-01 01:00:52.539139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539145 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539179 | orchestrator | 2026-03-01 01:00:52.539183 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-01 01:00:52.539187 | orchestrator | Sunday 01 March 2026 00:58:11 +0000 (0:00:02.168) 0:00:05.304 ********** 2026-03-01 01:00:52.539191 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:52.539195 | orchestrator | 2026-03-01 01:00:52.539198 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-01 01:00:52.539202 | orchestrator | Sunday 01 March 2026 00:58:11 +0000 (0:00:00.533) 0:00:05.837 ********** 2026-03-01 01:00:52.539211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539263 | orchestrator | 2026-03-01 01:00:52.539269 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-01 01:00:52.539276 | orchestrator | Sunday 01 March 2026 00:58:14 +0000 (0:00:03.135) 0:00:08.973 ********** 2026-03-01 01:00:52.539282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539295 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:52.539301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539326 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:52.539332 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539346 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:52.539352 | orchestrator | 
2026-03-01 01:00:52.539356 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-01 01:00:52.539360 | orchestrator | Sunday 01 March 2026 00:58:16 +0000 (0:00:01.295) 0:00:10.269 ********** 2026-03-01 01:00:52.539364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539386 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:52.539390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539399 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:52.539402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-01 01:00:52.539413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-01 01:00:52.539417 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:52.539421 | orchestrator | 2026-03-01 01:00:52.539426 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-01 01:00:52.539430 | orchestrator | Sunday 01 March 2026 00:58:16 +0000 (0:00:00.713) 0:00:10.982 ********** 2026-03-01 01:00:52.539434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539468 | orchestrator | 2026-03-01 01:00:52.539472 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-01 01:00:52.539476 | orchestrator | Sunday 01 March 2026 00:58:19 +0000 (0:00:02.583) 0:00:13.566 ********** 2026-03-01 01:00:52.539480 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:52.539484 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539487 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:52.539491 | orchestrator | 2026-03-01 01:00:52.539495 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-01 01:00:52.539501 | orchestrator | Sunday 01 March 2026 00:58:21 +0000 (0:00:02.308) 0:00:15.874 ********** 2026-03-01 01:00:52.539505 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:52.539509 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539513 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:52.539516 | orchestrator | 2026-03-01 01:00:52.539520 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-01 01:00:52.539524 | orchestrator | Sunday 01 March 2026 00:58:23 +0000 (0:00:02.097) 0:00:17.972 ********** 2026-03-01 01:00:52.539528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-01 01:00:52.539545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-01 01:00:52.539563 | orchestrator | 2026-03-01 01:00:52.539567 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-01 01:00:52.539573 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 
(0:00:02.134) 0:00:20.107 ********** 2026-03-01 01:00:52.539576 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:52.539580 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:52.539584 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:52.539588 | orchestrator | 2026-03-01 01:00:52.539592 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-01 01:00:52.539595 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:00.287) 0:00:20.394 ********** 2026-03-01 01:00:52.539599 | orchestrator | 2026-03-01 01:00:52.539603 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-01 01:00:52.539607 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:00.064) 0:00:20.458 ********** 2026-03-01 01:00:52.539611 | orchestrator | 2026-03-01 01:00:52.539614 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-01 01:00:52.539618 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:00.062) 0:00:20.521 ********** 2026-03-01 01:00:52.539622 | orchestrator | 2026-03-01 01:00:52.539626 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-01 01:00:52.539629 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:00.066) 0:00:20.587 ********** 2026-03-01 01:00:52.539633 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:52.539637 | orchestrator | 2026-03-01 01:00:52.539641 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-01 01:00:52.539645 | orchestrator | Sunday 01 March 2026 00:58:27 +0000 (0:00:00.892) 0:00:21.480 ********** 2026-03-01 01:00:52.539651 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:00:52.539655 | orchestrator | 2026-03-01 01:00:52.539660 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 
2026-03-01 01:00:52.539665 | orchestrator | Sunday 01 March 2026 00:58:27 +0000 (0:00:00.191) 0:00:21.671 ********** 2026-03-01 01:00:52.539669 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539674 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:52.539678 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:52.539683 | orchestrator | 2026-03-01 01:00:52.539687 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-01 01:00:52.539692 | orchestrator | Sunday 01 March 2026 00:59:33 +0000 (0:01:06.238) 0:01:27.910 ********** 2026-03-01 01:00:52.539696 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539701 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:52.539705 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:52.539710 | orchestrator | 2026-03-01 01:00:52.539714 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-01 01:00:52.539719 | orchestrator | Sunday 01 March 2026 01:00:40 +0000 (0:01:06.212) 0:02:34.123 ********** 2026-03-01 01:00:52.539724 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:52.539728 | orchestrator | 2026-03-01 01:00:52.539748 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-01 01:00:52.539758 | orchestrator | Sunday 01 March 2026 01:00:40 +0000 (0:00:00.596) 0:02:34.719 ********** 2026-03-01 01:00:52.539765 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:52.539772 | orchestrator | 2026-03-01 01:00:52.539778 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-01 01:00:52.539784 | orchestrator | Sunday 01 March 2026 01:00:43 +0000 (0:00:02.505) 0:02:37.225 ********** 2026-03-01 01:00:52.539790 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:52.539796 | 
orchestrator | 2026-03-01 01:00:52.539802 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-01 01:00:52.539808 | orchestrator | Sunday 01 March 2026 01:00:45 +0000 (0:00:02.102) 0:02:39.327 ********** 2026-03-01 01:00:52.539815 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:52.539821 | orchestrator | 2026-03-01 01:00:52.539827 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-01 01:00:52.539833 | orchestrator | Sunday 01 March 2026 01:00:47 +0000 (0:00:02.233) 0:02:41.561 ********** 2026-03-01 01:00:52.539839 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539845 | orchestrator | 2026-03-01 01:00:52.539851 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-01 01:00:52.539857 | orchestrator | Sunday 01 March 2026 01:00:50 +0000 (0:00:02.570) 0:02:44.132 ********** 2026-03-01 01:00:52.539863 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:52.539869 | orchestrator | 2026-03-01 01:00:52.539874 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:00:52.539882 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 01:00:52.539889 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-01 01:00:52.539901 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-01 01:00:52.539908 | orchestrator | 2026-03-01 01:00:52.539914 | orchestrator | 2026-03-01 01:00:52.539920 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:00:52.539927 | orchestrator | Sunday 01 March 2026 01:00:52 +0000 (0:00:02.171) 0:02:46.303 ********** 2026-03-01 01:00:52.539933 | orchestrator | 
=============================================================================== 2026-03-01 01:00:52.539939 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.24s 2026-03-01 01:00:52.539950 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 66.21s 2026-03-01 01:00:52.539956 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.14s 2026-03-01 01:00:52.539963 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.58s 2026-03-01 01:00:52.539970 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.57s 2026-03-01 01:00:52.539980 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.51s 2026-03-01 01:00:52.539987 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.31s 2026-03-01 01:00:52.539992 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.23s 2026-03-01 01:00:52.540000 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.17s 2026-03-01 01:00:52.540006 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.17s 2026-03-01 01:00:52.540013 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.13s 2026-03-01 01:00:52.540019 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.10s 2026-03-01 01:00:52.540025 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.10s 2026-03-01 01:00:52.540032 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.64s 2026-03-01 01:00:52.540038 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.30s 2026-03-01 01:00:52.540044 | orchestrator | 
opensearch : Disable shard allocation ----------------------------------- 0.89s 2026-03-01 01:00:52.540050 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.71s 2026-03-01 01:00:52.540057 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2026-03-01 01:00:52.540063 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-03-01 01:00:52.540068 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-03-01 01:00:52.540075 | orchestrator | 2026-03-01 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:55.576919 | orchestrator | 2026-03-01 01:00:55 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state STARTED 2026-03-01 01:00:55.578332 | orchestrator | 2026-03-01 01:00:55 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:55.578378 | orchestrator | 2026-03-01 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:00:58.619356 | orchestrator | 2026-03-01 01:00:58 | INFO  | Task e810adcb-6f30-4966-bfc0-72f81cbb2b87 is in state SUCCESS 2026-03-01 01:00:58.620138 | orchestrator | 2026-03-01 01:00:58.620186 | orchestrator | 2026-03-01 01:00:58.620193 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-01 01:00:58.620199 | orchestrator | 2026-03-01 01:00:58.620203 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-01 01:00:58.620208 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.087) 0:00:00.087 ********** 2026-03-01 01:00:58.620212 | orchestrator | ok: [localhost] => { 2026-03-01 01:00:58.620218 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-03-01 01:00:58.620223 | orchestrator | } 2026-03-01 01:00:58.620227 | orchestrator | 2026-03-01 01:00:58.620231 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-01 01:00:58.620235 | orchestrator | Sunday 01 March 2026 00:58:06 +0000 (0:00:00.051) 0:00:00.139 ********** 2026-03-01 01:00:58.620239 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-01 01:00:58.620245 | orchestrator | ...ignoring 2026-03-01 01:00:58.620249 | orchestrator | 2026-03-01 01:00:58.620253 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-01 01:00:58.620273 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:02.917) 0:00:03.057 ********** 2026-03-01 01:00:58.620277 | orchestrator | skipping: [localhost] 2026-03-01 01:00:58.620281 | orchestrator | 2026-03-01 01:00:58.620285 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-01 01:00:58.620289 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.070) 0:00:03.128 ********** 2026-03-01 01:00:58.620293 | orchestrator | ok: [localhost] 2026-03-01 01:00:58.620297 | orchestrator | 2026-03-01 01:00:58.620301 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:00:58.620305 | orchestrator | 2026-03-01 01:00:58.620309 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:00:58.620313 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.227) 0:00:03.356 ********** 2026-03-01 01:00:58.620317 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:00:58.620321 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:00:58.620325 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:00:58.620329 | orchestrator | 2026-03-01 01:00:58.620332 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:00:58.620337 | orchestrator | Sunday 01 March 2026 00:58:09 +0000 (0:00:00.386) 0:00:03.742 ********** 2026-03-01 01:00:58.620340 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-01 01:00:58.620345 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-01 01:00:58.620349 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-01 01:00:58.620353 | orchestrator | 2026-03-01 01:00:58.620357 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-01 01:00:58.620361 | orchestrator | 2026-03-01 01:00:58.620365 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-01 01:00:58.620369 | orchestrator | Sunday 01 March 2026 00:58:10 +0000 (0:00:00.690) 0:00:04.432 ********** 2026-03-01 01:00:58.620373 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-01 01:00:58.620377 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-01 01:00:58.620381 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-01 01:00:58.620385 | orchestrator | 2026-03-01 01:00:58.620389 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-01 01:00:58.620402 | orchestrator | Sunday 01 March 2026 00:58:10 +0000 (0:00:00.385) 0:00:04.818 ********** 2026-03-01 01:00:58.620409 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:58.620416 | orchestrator | 2026-03-01 01:00:58.620423 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-01 01:00:58.620432 | orchestrator | Sunday 01 March 2026 00:58:11 +0000 (0:00:00.515) 0:00:05.334 ********** 2026-03-01 01:00:58.620456 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620825 | orchestrator | 2026-03-01 01:00:58.620847 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-01 01:00:58.620852 | orchestrator | Sunday 01 March 2026 00:58:14 +0000 (0:00:03.324) 0:00:08.658 ********** 2026-03-01 01:00:58.620856 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:58.620860 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:58.620864 | 
orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:58.620868 | orchestrator | 2026-03-01 01:00:58.620872 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-01 01:00:58.620876 | orchestrator | Sunday 01 March 2026 00:58:15 +0000 (0:00:00.780) 0:00:09.439 ********** 2026-03-01 01:00:58.620879 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:58.620883 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:58.620887 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:58.620890 | orchestrator | 2026-03-01 01:00:58.620894 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-01 01:00:58.620898 | orchestrator | Sunday 01 March 2026 00:58:17 +0000 (0:00:01.630) 0:00:11.069 ********** 2026-03-01 01:00:58.620908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-01 01:00:58.620932 | orchestrator | 2026-03-01 01:00:58.620936 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-01 01:00:58.620939 | orchestrator | Sunday 01 March 2026 00:58:20 +0000 (0:00:03.097) 0:00:14.167 ********** 2026-03-01 01:00:58.620943 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:00:58.620947 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:00:58.620951 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:58.620955 | orchestrator | 2026-03-01 01:00:58.620959 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-01 01:00:58.620965 | orchestrator | Sunday 01 March 2026 00:58:21 +0000 (0:00:01.182) 0:00:15.350 ********** 2026-03-01 01:00:58.620969 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:00:58.620973 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:00:58.620977 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:00:58.620981 | orchestrator | 2026-03-01 01:00:58.620984 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-01 01:00:58.620988 | orchestrator | Sunday 01 March 2026 00:58:25 +0000 (0:00:04.138) 0:00:19.488 ********** 2026-03-01 01:00:58.620993 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:00:58.621000 | orchestrator | 2026-03-01 01:00:58.621004 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-01 
01:00:58.621008 | orchestrator | Sunday 01 March 2026 00:58:26 +0000 (0:00:00.497) 0:00:19.985 **********
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {… same service definition as above, with 'MYSQL_HOST': '192.168.16.11' …}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {… same service definition as above, with 'MYSQL_HOST': '192.168.16.12' …}})
skipping: [testbed-node-2]

TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
Sunday 01 March 2026 00:58:28 +0000 (0:00:02.763) 0:00:22.749 **********
skipping: [testbed-node-1] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.11' …})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.10' …})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.12' …})
skipping: [testbed-node-2]

TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
Sunday 01 March 2026 00:58:33 +0000 (0:00:04.322) 0:00:27.072 **********
skipping: [testbed-node-1] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.11' …})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.10' …})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.12' …})
skipping: [testbed-node-2]

TASK [mariadb : Check mariadb containers] **************************************
Sunday 01 March 2026 00:58:36 +0000 (0:00:02.963) 0:00:30.035 **********
changed: [testbed-node-0] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.10' …})
changed: [testbed-node-2] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.12' …})
changed: [testbed-node-1] => (item={… same service definition, 'MYSQL_HOST': '192.168.16.11' …})

TASK [mariadb : Create MariaDB volume] *****************************************
Sunday 01 March 2026 00:58:40 +0000 (0:00:04.060) 0:00:34.096 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
Sunday 01 March 2026 00:58:41 +0000 (0:00:01.053) 0:00:35.150 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Establish whether the cluster has already existed] *************
Sunday 01 March 2026 00:58:41 +0000 (0:00:00.338) 0:00:35.489 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Check MariaDB service port liveness] ***************************
Sunday 01 March 2026 00:58:41 +0000 (0:00:00.341) 0:00:35.830 **********
fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
...ignoring
fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
...ignoring
fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
...ignoring

TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
Sunday 01 March 2026 00:58:52 +0000 (0:00:10.853) 0:00:46.684 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Fail on existing but stopped cluster] **************************
Sunday 01 March 2026 00:58:53 +0000 (0:00:00.396) 0:00:47.081 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
Sunday 01 March 2026 00:58:53 +0000 (0:00:00.518) 0:00:47.600 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
Sunday 01 March 2026 00:58:54 +0000 (0:00:00.383) 0:00:47.983 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
Sunday 01 March 2026 00:58:54 +0000 (0:00:00.449) 0:00:48.432 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
Sunday 01 March 2026 00:58:54 +0000 (0:00:00.381) 0:00:48.814 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : include_tasks] *************************************************
Sunday 01 March 2026 00:58:55 +0000 (0:00:00.591) 0:00:49.406 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0

TASK [mariadb : Running MariaDB bootstrap container] ***************************
Sunday 01 March 2026 00:58:55 +0000 (0:00:00.379) 0:00:49.785 **********
changed: [testbed-node-0]

TASK [mariadb : Store bootstrap host name into facts] **************************
Sunday 01 March 2026 00:59:05 +0000 (0:00:09.676) 0:00:59.461 **********
ok: [testbed-node-0]

TASK [mariadb : include_tasks] *************************************************
Sunday 01 March 2026 00:59:05 +0000 (0:00:00.133) 0:00:59.595 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
Sunday 01 March 2026 00:59:06 +0000 (0:00:00.962) 0:01:00.558 **********
changed: [testbed-node-0]

RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
Sunday 01 March 2026 00:59:14 +0000 (0:00:07.796) 0:01:08.355 **********
ok: [testbed-node-0]

RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
Sunday 01 March 2026 00:59:16 +0000 (0:00:01.685) 0:01:10.041 **********
ok: [testbed-node-0]

RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
Sunday 01 March 2026 00:59:18 +0000 (0:00:02.635) 0:01:12.676 **********
changed: [testbed-node-0]

RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
Sunday 01 March 2026 00:59:18 +0000 (0:00:00.171) 0:01:12.848 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
Sunday 01 March 2026 00:59:19 +0000 (0:00:00.340) 0:01:13.189 **********
skipping: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
[WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart

PLAY [Restart mariadb services] ************************************************
skipping: no hosts matched

PLAY [Start mariadb services] **************************************************

TASK [mariadb : Restart MariaDB container] *************************************
Sunday 01 March 2026 00:59:19 +0000 (0:00:00.534) 0:01:13.723 **********
changed: [testbed-node-1]

TASK [mariadb : Wait for MariaDB service port liveness] ************************
Sunday 01 March 2026 00:59:35 +0000 (0:00:15.798) 0:01:29.522 **********
ok: [testbed-node-1]

TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
Sunday 01 March 2026 00:59:51 +0000 (0:00:15.547) 0:01:45.069 **********
ok: [testbed-node-1]

PLAY [Start mariadb services] **************************************************

TASK [mariadb : Restart MariaDB container] *************************************
Sunday 01 March 2026 00:59:53 +0000 (0:00:02.185) 0:01:47.254 **********
changed: [testbed-node-2]

TASK [mariadb : Wait for MariaDB service port liveness] ************************
Sunday 01 March 2026 01:00:08 +0000 (0:00:15.140) 0:02:02.395 **********
ok: [testbed-node-2]

TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
Sunday 01 March 2026 01:00:23 +0000 (0:00:15.511) 0:02:17.906 **********
ok: [testbed-node-2]

PLAY [Restart bootstrap mariadb service] ***************************************

TASK [mariadb : Restart MariaDB container] *************************************
Sunday 01 March 2026 01:00:26 +0000 (0:00:02.280) 0:02:20.187 **********
changed: [testbed-node-0]

TASK [mariadb : Wait for MariaDB service port liveness] ************************
Sunday 01 March 2026 01:00:37 +0000 (0:00:11.070) 0:02:31.258 **********
ok: [testbed-node-0]

TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
Sunday 01 March 2026 01:00:41 +0000 (0:00:04.627) 0:02:35.885 **********
ok: [testbed-node-0]

PLAY [Apply mariadb post-configuration] ****************************************

TASK [Include mariadb post-deploy.yml] *****************************************
Sunday 01 March 2026 01:00:44 +0000 (0:00:02.396) 0:02:38.282 **********
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2

TASK [mariadb : Creating shard root mysql user] ********************************
Sunday 01 March 2026 01:00:44 +0000 (0:00:00.496) 0:02:38.778 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Creating mysql monitor user] ***********************************
Sunday 01 March 2026 01:00:47 +0000 (0:00:02.367) 0:02:41.145 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Creating database backup user and setting permissions] *********
Sunday 01 March 2026 01:00:49 +0000 (0:00:01.875) 0:02:43.021 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
Sunday 01 March 2026 01:00:50 +0000 (0:00:01.891) 0:02:44.912 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
Sunday 01 March 2026 01:00:52 +0000 (0:00:01.906) 0:02:46.819 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [Include mariadb post-upgrade.yml] ****************************************
Sunday 01 March 2026 01:00:55 +0000 (0:00:02.966) 0:02:49.786 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=1
testbed-node-0             : ok=34   changed=16   unreachable=0    failed=0    skipped=11   rescued=0    ignored=1
testbed-node-1             : ok=20   changed=7    unreachable=0    failed=0    skipped=18   rescued=0    ignored=1
testbed-node-2             : ok=20   changed=7    unreachable=0    failed=0    skipped=18   rescued=0    ignored=1
TASKS RECAP ******************************************************************** 2026-03-01 01:00:58.622301 | orchestrator | Sunday 01 March 2026 01:00:56 +0000 (0:00:00.201) 0:02:49.987 ********** 2026-03-01 01:00:58.622306 | orchestrator | =============================================================================== 2026-03-01 01:00:58.622310 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.06s 2026-03-01 01:00:58.622313 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 30.94s 2026-03-01 01:00:58.622317 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.07s 2026-03-01 01:00:58.622321 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s 2026-03-01 01:00:58.622329 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.68s 2026-03-01 01:00:58.622333 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.80s 2026-03-01 01:00:58.622341 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s 2026-03-01 01:00:58.622345 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.47s 2026-03-01 01:00:58.622349 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.32s 2026-03-01 01:00:58.622353 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.14s 2026-03-01 01:00:58.622357 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.06s 2026-03-01 01:00:58.622361 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.32s 2026-03-01 01:00:58.622364 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.10s 2026-03-01 01:00:58.622368 | orchestrator | mariadb : Wait 
for MariaDB service to be ready through VIP -------------- 2.97s 2026-03-01 01:00:58.622372 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.96s 2026-03-01 01:00:58.622376 | orchestrator | Check MariaDB service --------------------------------------------------- 2.92s 2026-03-01 01:00:58.622379 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.76s 2026-03-01 01:00:58.622383 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.64s 2026-03-01 01:00:58.622387 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s 2026-03-01 01:00:58.622391 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s 2026-03-01 01:00:58.622394 | orchestrator | 2026-03-01 01:00:58 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:00:58.622399 | orchestrator | 2026-03-01 01:00:58 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state STARTED 2026-03-01 01:00:58.622402 | orchestrator | 2026-03-01 01:00:58 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED 2026-03-01 01:00:58.622406 | orchestrator | 2026-03-01 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:08.644912 | orchestrator | 2026-03-01 01:02:08 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED 2026-03-01 01:02:08.646551 | orchestrator | 2026-03-01 01:02:08 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:08.650064 | orchestrator | 2026-03-01 01:02:08 | INFO  | Task ba40da40-49cb-4042-8c25-651de6e569dd is in state SUCCESS 2026-03-01 01:02:08.651856 | orchestrator | 2026-03-01 01:02:08.651927 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-01 01:02:08.651936 | orchestrator | 2.16.14 2026-03-01 01:02:08.651941 | orchestrator | 2026-03-01 01:02:08.651946 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-01 01:02:08.651951 | orchestrator | 2026-03-01 01:02:08.651955 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-01 01:02:08.651960 | orchestrator | Sunday 01 March 2026 01:00:07 +0000 (0:00:00.530) 0:00:00.530 ********** 2026-03-01 01:02:08.651964 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4,
testbed-node-5 2026-03-01 01:02:08.651969 | orchestrator | 2026-03-01 01:02:08.651976 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-01 01:02:08.651982 | orchestrator | Sunday 01 March 2026 01:00:07 +0000 (0:00:00.528) 0:00:01.059 ********** 2026-03-01 01:02:08.652076 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652162 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652441 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652471 | orchestrator | 2026-03-01 01:02:08.652479 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-01 01:02:08.652487 | orchestrator | Sunday 01 March 2026 01:00:08 +0000 (0:00:00.541) 0:00:01.601 ********** 2026-03-01 01:02:08.652494 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652501 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652508 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652514 | orchestrator | 2026-03-01 01:02:08.652521 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-01 01:02:08.652528 | orchestrator | Sunday 01 March 2026 01:00:08 +0000 (0:00:00.267) 0:00:01.869 ********** 2026-03-01 01:02:08.652535 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652542 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652549 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652556 | orchestrator | 2026-03-01 01:02:08.652564 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-01 01:02:08.652571 | orchestrator | Sunday 01 March 2026 01:00:09 +0000 (0:00:00.658) 0:00:02.527 ********** 2026-03-01 01:02:08.652578 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652585 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652871 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652896 | orchestrator | 2026-03-01 
01:02:08.652904 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-01 01:02:08.652912 | orchestrator | Sunday 01 March 2026 01:00:09 +0000 (0:00:00.273) 0:00:02.801 ********** 2026-03-01 01:02:08.652918 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652924 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652930 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652937 | orchestrator | 2026-03-01 01:02:08.652943 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-01 01:02:08.652950 | orchestrator | Sunday 01 March 2026 01:00:09 +0000 (0:00:00.271) 0:00:03.073 ********** 2026-03-01 01:02:08.652957 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.652963 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.652969 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.652975 | orchestrator | 2026-03-01 01:02:08.652982 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-01 01:02:08.652989 | orchestrator | Sunday 01 March 2026 01:00:10 +0000 (0:00:00.279) 0:00:03.352 ********** 2026-03-01 01:02:08.652995 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653003 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653010 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653016 | orchestrator | 2026-03-01 01:02:08.653022 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-01 01:02:08.653029 | orchestrator | Sunday 01 March 2026 01:00:10 +0000 (0:00:00.414) 0:00:03.766 ********** 2026-03-01 01:02:08.653036 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.653042 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.653048 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.653055 | orchestrator | 2026-03-01 01:02:08.653061 | orchestrator | TASK [ceph-facts : 
Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-01 01:02:08.653068 | orchestrator | Sunday 01 March 2026 01:00:10 +0000 (0:00:00.273) 0:00:04.040 ********** 2026-03-01 01:02:08.653088 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:02:08.653094 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:02:08.653101 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:02:08.653107 | orchestrator | 2026-03-01 01:02:08.653114 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-01 01:02:08.653120 | orchestrator | Sunday 01 March 2026 01:00:11 +0000 (0:00:00.592) 0:00:04.632 ********** 2026-03-01 01:02:08.653127 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.653149 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.653157 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.653163 | orchestrator | 2026-03-01 01:02:08.653169 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-01 01:02:08.653176 | orchestrator | Sunday 01 March 2026 01:00:11 +0000 (0:00:00.433) 0:00:05.065 ********** 2026-03-01 01:02:08.653183 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-01 01:02:08.653188 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-01 01:02:08.653195 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-01 01:02:08.653201 | orchestrator | 2026-03-01 01:02:08.653208 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-01 01:02:08.653214 | orchestrator | Sunday 01 March 2026 01:00:13 +0000 (0:00:01.846) 0:00:06.912 ********** 2026-03-01 01:02:08.653221 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-01 01:02:08.653227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-01 01:02:08.653234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-01 01:02:08.653241 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653248 | orchestrator | 2026-03-01 01:02:08.653298 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-01 01:02:08.653306 | orchestrator | Sunday 01 March 2026 01:00:14 +0000 (0:00:00.517) 0:00:07.430 ********** 2026-03-01 01:02:08.653315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653339 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653345 | orchestrator | 2026-03-01 01:02:08.653352 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-01 01:02:08.653358 | orchestrator | Sunday 01 March 2026 01:00:15 +0000 (0:00:00.695) 0:00:08.125 ********** 2026-03-01 01:02:08.653367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.653396 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653403 | orchestrator | 2026-03-01 01:02:08.653409 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-01 01:02:08.653415 | orchestrator | Sunday 01 March 2026 01:00:15 +0000 (0:00:00.283) 0:00:08.409 ********** 2026-03-01 01:02:08.653525 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c4bc18730f48', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-01 01:00:12.562784', 'end': '2026-03-01 01:00:12.594005', 'delta': '0:00:00.031221', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c4bc18730f48'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-01 01:02:08.653544 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '65781cd756b7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-01 01:00:13.158334', 'end': '2026-03-01 01:00:13.191585', 'delta': '0:00:00.033251', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['65781cd756b7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-01 01:02:08.653584 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e794915693f9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-01 01:00:13.641896', 'end': '2026-03-01 01:00:13.683024', 'delta': '0:00:00.041128', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e794915693f9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-01 01:02:08.653592 | orchestrator | 2026-03-01 01:02:08.653599 | 
orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-01 01:02:08.653605 | orchestrator | Sunday 01 March 2026 01:00:15 +0000 (0:00:00.169) 0:00:08.578 ********** 2026-03-01 01:02:08.653612 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.653618 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:02:08.653721 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:02:08.653729 | orchestrator | 2026-03-01 01:02:08.653735 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-01 01:02:08.653740 | orchestrator | Sunday 01 March 2026 01:00:15 +0000 (0:00:00.373) 0:00:08.951 ********** 2026-03-01 01:02:08.653745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-01 01:02:08.653751 | orchestrator | 2026-03-01 01:02:08.653756 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-01 01:02:08.653761 | orchestrator | Sunday 01 March 2026 01:00:17 +0000 (0:00:02.048) 0:00:11.000 ********** 2026-03-01 01:02:08.653765 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653770 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653775 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653779 | orchestrator | 2026-03-01 01:02:08.653784 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-01 01:02:08.653788 | orchestrator | Sunday 01 March 2026 01:00:18 +0000 (0:00:00.294) 0:00:11.295 ********** 2026-03-01 01:02:08.653800 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653804 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653807 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653811 | orchestrator | 2026-03-01 01:02:08.653815 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-01 01:02:08.653819 | 
orchestrator | Sunday 01 March 2026 01:00:18 +0000 (0:00:00.353) 0:00:11.648 ********** 2026-03-01 01:02:08.653823 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653827 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653831 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653835 | orchestrator | 2026-03-01 01:02:08.653838 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-01 01:02:08.653842 | orchestrator | Sunday 01 March 2026 01:00:18 +0000 (0:00:00.399) 0:00:12.047 ********** 2026-03-01 01:02:08.653846 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:02:08.653850 | orchestrator | 2026-03-01 01:02:08.653853 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-01 01:02:08.653857 | orchestrator | Sunday 01 March 2026 01:00:19 +0000 (0:00:00.141) 0:00:12.189 ********** 2026-03-01 01:02:08.653862 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653865 | orchestrator | 2026-03-01 01:02:08.653869 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-01 01:02:08.653873 | orchestrator | Sunday 01 March 2026 01:00:19 +0000 (0:00:00.217) 0:00:12.407 ********** 2026-03-01 01:02:08.653877 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653880 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653884 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653888 | orchestrator | 2026-03-01 01:02:08.653897 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-01 01:02:08.653901 | orchestrator | Sunday 01 March 2026 01:00:19 +0000 (0:00:00.264) 0:00:12.672 ********** 2026-03-01 01:02:08.653905 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653908 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653912 | orchestrator | skipping: 
[testbed-node-5] 2026-03-01 01:02:08.653916 | orchestrator | 2026-03-01 01:02:08.653920 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-01 01:02:08.653923 | orchestrator | Sunday 01 March 2026 01:00:19 +0000 (0:00:00.286) 0:00:12.958 ********** 2026-03-01 01:02:08.653927 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653931 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653935 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653939 | orchestrator | 2026-03-01 01:02:08.653943 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-01 01:02:08.653946 | orchestrator | Sunday 01 March 2026 01:00:20 +0000 (0:00:00.399) 0:00:13.357 ********** 2026-03-01 01:02:08.653950 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653954 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653958 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653961 | orchestrator | 2026-03-01 01:02:08.653965 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-01 01:02:08.653969 | orchestrator | Sunday 01 March 2026 01:00:20 +0000 (0:00:00.300) 0:00:13.657 ********** 2026-03-01 01:02:08.653973 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.653977 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.653981 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.653984 | orchestrator | 2026-03-01 01:02:08.653988 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-01 01:02:08.653992 | orchestrator | Sunday 01 March 2026 01:00:20 +0000 (0:00:00.275) 0:00:13.933 ********** 2026-03-01 01:02:08.653996 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.654000 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.654004 | orchestrator | skipping: 
[testbed-node-5] 2026-03-01 01:02:08.654068 | orchestrator | 2026-03-01 01:02:08.654076 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-01 01:02:08.654084 | orchestrator | Sunday 01 March 2026 01:00:21 +0000 (0:00:00.271) 0:00:14.205 ********** 2026-03-01 01:02:08.654088 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.654092 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.654096 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:02:08.654099 | orchestrator | 2026-03-01 01:02:08.654103 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-01 01:02:08.654107 | orchestrator | Sunday 01 March 2026 01:00:21 +0000 (0:00:00.398) 0:00:14.604 ********** 2026-03-01 01:02:08.654113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31f22992--0e1a--5ef5--a8b3--14a12910c272-osd--block--31f22992--0e1a--5ef5--a8b3--14a12910c272', 'dm-uuid-LVM-ZJGtCQF6v1S5Yu9yuOCiJGbLXIQrfHttVCKEY7DBdmSjbodhQeQY4g11ngYvfdI2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3-osd--block--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3', 'dm-uuid-LVM-LX5TJN4QJIjZNUTehHp2O357487HqAP19VEUTo3ChWgOBrMUqm1cCb2jg3r9YlwW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654144 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--024d169c--08bb--513a--b447--fe5a7c318e63-osd--block--024d169c--08bb--513a--b447--fe5a7c318e63', 'dm-uuid-LVM-mzgyAp4vw7ckb27duHzddo8Zn4qMBdwmux1G1ZIIWexZmaJzgKAreEkfySOmlweu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b33a93dc--e50a--56e8--9161--d310a7d41007-osd--block--b33a93dc--e50a--56e8--9161--d310a7d41007', 'dm-uuid-LVM-v3o82Fgeuju9hDVXfQOZ5UaNsxeSpxK77njOjMRF9KSq1HMBwaxDGNX3CKMtxMIe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_5e1112be-30db-4f57-b8d5-3281055496d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654303 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31f22992--0e1a--5ef5--a8b3--14a12910c272-osd--block--31f22992--0e1a--5ef5--a8b3--14a12910c272'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zdGVLJ-V5gm-IqSV-fzjR-xrHd-FM9P-oMCgkd', 'scsi-0QEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0', 'scsi-SQEMU_QEMU_HARDDISK_13ef5d91-70cf-4b91-a3c5-d7eedb39bef0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3-osd--block--71bbeaa0--80e8--52b0--b7ca--02965d05b7d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pQiiT0-1fQr-kPce-rgfU-KeAC-vxST-Vg7e3r', 'scsi-0QEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec', 'scsi-SQEMU_QEMU_HARDDISK_538fc64d-5c22-41e2-8e6b-45fa8fa82fec'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17', 'scsi-SQEMU_QEMU_HARDDISK_fa955766-0e66-4eff-90a7-dd2f9191ad17'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-01 01:02:08.654361 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:02:08.654365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16', 'scsi-SQEMU_QEMU_HARDDISK_906633b6-f217-4172-b29a-2cd328ecb060-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d-osd--block--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d', 'dm-uuid-LVM-wRmWDUNJPt67ozAI6V0Iyirq37GUM3D562kx6TYQ4CSJ4UCLlAwmSFyH40byWHN1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--024d169c--08bb--513a--b447--fe5a7c318e63-osd--block--024d169c--08bb--513a--b447--fe5a7c318e63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VT6Hxk-OOUG-qeJQ-fb6b-cwwz-OqYZ-9TvXjl', 'scsi-0QEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389', 'scsi-SQEMU_QEMU_HARDDISK_13610e01-1185-4ea8-85ed-961cbe272389'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b33a93dc--e50a--56e8--9161--d310a7d41007-osd--block--b33a93dc--e50a--56e8--9161--d310a7d41007'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UW0p8v-atpJ-tsfM-QCHF-dCbs-8Eoy-5fYRbS', 'scsi-0QEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6', 'scsi-SQEMU_QEMU_HARDDISK_9e01ca4d-bc22-4e1f-86a3-dfd90b879ac6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1a7437a--a9c6--5afd--b028--da6f65a62b89-osd--block--d1a7437a--a9c6--5afd--b028--da6f65a62b89', 'dm-uuid-LVM-2Ox7t6bo83O9jU0axPebCrOBB156JJHk65EARBuFNKoVl8g7TRHbfBMQXS65kqKL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c', 'scsi-SQEMU_QEMU_HARDDISK_eb2aa366-42c4-4388-b5bb-c244b0993c0c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-01 01:02:08.654445 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:02:08.654449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-01 01:02:08.654476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-01 01:02:08.654485 | orchestrator | skipping: [testbed-node-5] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 79.00 GB [cloudimg-rootfs], sda14 4.00 MB, sda15 106.00 MB [UEFI], sda16 913.00 MB [BOOT])
2026-03-01 01:02:08.654489 | orchestrator | skipping: [testbed-node-5] => (item=sdb: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block 14f5527d, master dm-0)
2026-03-01 01:02:08.654497 | orchestrator | skipping: [testbed-node-5] => (item=sdc: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block d1a7437a, master dm-1)
2026-03-01 01:02:08.654505 | orchestrator | skipping: [testbed-node-5] => (item=sdd: QEMU HARDDISK, 20.00 GB; no holders)
2026-03-01 01:02:08.654515 | orchestrator | skipping: [testbed-node-5] => (item=sr0: QEMU DVD-ROM, 506.00 KB, label config-2)
2026-03-01 01:02:08.654519 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.654523 | orchestrator |
2026-03-01 01:02:08.654527 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] ***
2026-03-01 01:02:08.654531 | orchestrator | Sunday 01 March 2026 01:00:21 +0000 (0:00:00.482) 0:00:15.086 **********
2026-03-01 01:02:08.654536 | orchestrator | skipping: [testbed-node-3] => (item=dm-0: ceph OSD block LV 31f22992, 20.00 GB; 'osd_auto_discovery | default(False) | bool' was false)
2026-03-01 01:02:08.654541 | orchestrator | skipping: [testbed-node-3] => (item=dm-1: ceph OSD block LV 71bbeaa0, 20.00 GB; condition false)
2026-03-01 01:02:08.654551 | orchestrator | skipping: [testbed-node-3] => (item=loop0: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654560 | orchestrator | skipping: [testbed-node-3] => (item=loop1: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654565 | orchestrator | skipping: [testbed-node-3] => (item=loop2: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654575 | orchestrator | skipping: [testbed-node-3] => (item=loop3: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654579 | orchestrator | skipping: [testbed-node-3] => (item=loop4: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654583 | orchestrator | skipping: [testbed-node-3] => (item=loop5: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654692 | orchestrator | skipping: [testbed-node-3] => (item=loop6: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654728 | orchestrator | skipping: [testbed-node-3] => (item=loop7: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654744 | orchestrator | skipping: [testbed-node-3] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 79.00 GB [cloudimg-rootfs], sda14 4.00 MB, sda15 106.00 MB [UEFI], sda16 913.00 MB [BOOT]; condition false)
2026-03-01 01:02:08.654753 | orchestrator | skipping: [testbed-node-3] => (item=sdb: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block 31f22992, master dm-0; condition false)
2026-03-01 01:02:08.654764 | orchestrator | skipping: [testbed-node-4] => (item=dm-0: ceph OSD block LV 024d169c, 20.00 GB; condition false)
2026-03-01 01:02:08.654776 | orchestrator | skipping: [testbed-node-3] => (item=sdc: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block 71bbeaa0, master dm-1; condition false)
2026-03-01 01:02:08.654789 | orchestrator | skipping: [testbed-node-4] => (item=dm-1: ceph OSD block LV b33a93dc, 20.00 GB; condition false)
2026-03-01 01:02:08.654795 | orchestrator | skipping: [testbed-node-3] => (item=sdd: QEMU HARDDISK, 20.00 GB; no holders; condition false)
2026-03-01 01:02:08.654802 | orchestrator | skipping: [testbed-node-4] => (item=loop0: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654809 | orchestrator | skipping: [testbed-node-3] => (item=sr0: QEMU DVD-ROM, 506.00 KB, label config-2; condition false)
2026-03-01 01:02:08.654825 | orchestrator | skipping: [testbed-node-4] => (item=loop1: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654833 | orchestrator | skipping: [testbed-node-4] => (item=loop2: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654840 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.654855 | orchestrator | skipping: [testbed-node-4] => (item=loop3: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654864 | orchestrator | skipping: [testbed-node-4] => (item=loop4: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654873 | orchestrator | skipping: [testbed-node-4] => (item=loop5: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654882 | orchestrator | skipping: [testbed-node-4] => (item=loop6: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654956 | orchestrator | skipping: [testbed-node-4] => (item=loop7: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.654966 | orchestrator | skipping: [testbed-node-5] => (item=dm-0: ceph OSD block LV 14f5527d, 20.00 GB; condition false)
2026-03-01 01:02:08.654980 | orchestrator | skipping: [testbed-node-4] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 79.00 GB [cloudimg-rootfs], sda14 4.00 MB, sda15 106.00 MB [UEFI], sda16 913.00 MB [BOOT]; condition false)
2026-03-01 01:02:08.654993 | orchestrator | skipping: [testbed-node-5] => (item=dm-1: ceph OSD block LV d1a7437a, 20.00 GB; condition false)
2026-03-01 01:02:08.655003 | orchestrator | skipping: [testbed-node-4] => (item=sdb: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block 024d169c, master dm-0; condition false)
2026-03-01 01:02:08.655015 | orchestrator | skipping: [testbed-node-5] => (item=loop0: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655022 | orchestrator | skipping: [testbed-node-4] => (item=sdc: QEMU HARDDISK, 20.00 GB; LVM PV, holder ceph OSD block b33a93dc, master dm-1; condition false)
2026-03-01 01:02:08.655029 | orchestrator | skipping: [testbed-node-5] => (item=loop1: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655040 | orchestrator | skipping: [testbed-node-4] => (item=sdd: QEMU HARDDISK, 20.00 GB; no holders; condition false)
2026-03-01 01:02:08.655047 | orchestrator | skipping: [testbed-node-5] => (item=loop2: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655051 | orchestrator | skipping: [testbed-node-4] => (item=sr0: QEMU DVD-ROM, 506.00 KB, label config-2; condition false)
2026-03-01 01:02:08.655055 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655062 | orchestrator | skipping: [testbed-node-5] => (item=loop3: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655066 | orchestrator | skipping: [testbed-node-5] => (item=loop4: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655070 | orchestrator | skipping: [testbed-node-5] => (item=loop5: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655077 | orchestrator | skipping: [testbed-node-5] => (item=loop6: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655086 | orchestrator | skipping: [testbed-node-5] => (item=loop7: empty loop device, 0.00 Bytes; condition false)
2026-03-01 01:02:08.655094 | orchestrator | skipping: [testbed-node-5] => (item=sda: QEMU HARDDISK, 80.00 GB; partitions sda1 79.00 GB [cloudimg-rootfs], sda14 4.00 MB, sda15 106.00 MB [UEFI], sda16 913.00 MB [BOOT]; condition false)
2026-03-01 01:02:08.655098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d-osd--block--14f5527d--3d57--5d3d--81f7--fd6f0358fc1d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IIKyfs-DDRe-vOw6-n6TR-1J1I-YxeN-dtTeK0', 'scsi-0QEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa', 'scsi-SQEMU_QEMU_HARDDISK_3ecd9c37-f666-48da-b9e6-5062929e61fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.655109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d1a7437a--a9c6--5afd--b028--da6f65a62b89-osd--block--d1a7437a--a9c6--5afd--b028--da6f65a62b89'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-R0IrbS-6ZVu-oH9t-3sKs-1lcJ-pg5J-AZQ5u1', 'scsi-0QEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7', 'scsi-SQEMU_QEMU_HARDDISK_75e82ebc-a155-450e-9812-4025914dfeb7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.655114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c', 'scsi-SQEMU_QEMU_HARDDISK_0950a1db-ab80-47bb-a3df-92529f49175c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-01 01:02:08.655122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-01-00-03-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-01 01:02:08.655126 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655130 | orchestrator |
2026-03-01 01:02:08.655134 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-01 01:02:08.655139 | orchestrator | Sunday 01 March 2026 01:00:22 +0000 (0:00:00.531) 0:00:15.618 **********
2026-03-01 01:02:08.655142 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:02:08.655147 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:02:08.655150 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:02:08.655154 | orchestrator |
2026-03-01 01:02:08.655158 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-01 01:02:08.655162 | orchestrator | Sunday 01 March 2026 01:00:23 +0000 (0:00:00.592) 0:00:16.211 **********
2026-03-01 01:02:08.655170 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:02:08.655173 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:02:08.655177 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:02:08.655181 | orchestrator |
2026-03-01 01:02:08.655185 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-01 01:02:08.655189 | orchestrator | Sunday 01 March 2026 01:00:23 +0000 (0:00:00.372) 0:00:16.583 **********
2026-03-01 01:02:08.655193 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:02:08.655197 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:02:08.655201 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:02:08.655204 | orchestrator |
2026-03-01 01:02:08.655208 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-01 01:02:08.655212 | orchestrator | Sunday 01 March 2026 01:00:24 +0000 (0:00:00.603) 0:00:17.187 **********
2026-03-01 01:02:08.655216 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655220 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655224 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655227 | orchestrator |
2026-03-01 01:02:08.655231 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-01 01:02:08.655235 | orchestrator | Sunday 01 March 2026 01:00:24 +0000 (0:00:00.270) 0:00:17.458 **********
2026-03-01 01:02:08.655239 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655243 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655247 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655250 | orchestrator |
2026-03-01 01:02:08.655254 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-01 01:02:08.655258 | orchestrator | Sunday 01 March 2026 01:00:24 +0000 (0:00:00.360) 0:00:17.818 **********
2026-03-01 01:02:08.655262 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655266 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655269 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655273 | orchestrator |
2026-03-01 01:02:08.655277 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-01 01:02:08.655281 | orchestrator | Sunday 01 March 2026 01:00:25 +0000 (0:00:00.420) 0:00:18.239 **********
2026-03-01 01:02:08.655285 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-01 01:02:08.655289 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-01 01:02:08.655293 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-01 01:02:08.655297 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-01 01:02:08.655301 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-01 01:02:08.655305 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-01 01:02:08.655309 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-01 01:02:08.655312 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-01 01:02:08.655316 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-01 01:02:08.655320 | orchestrator |
2026-03-01 01:02:08.655326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-01 01:02:08.655330 | orchestrator | Sunday 01 March 2026 01:00:25 +0000 (0:00:00.751) 0:00:18.990 **********
2026-03-01 01:02:08.655334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-01 01:02:08.655339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-01 01:02:08.655342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-01 01:02:08.655346 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-01 01:02:08.655354 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-01 01:02:08.655358 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-01 01:02:08.655362 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-01 01:02:08.655373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-01 01:02:08.655376 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-01 01:02:08.655380 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655384 | orchestrator |
2026-03-01 01:02:08.655388 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-01 01:02:08.655392 | orchestrator | Sunday 01 March 2026 01:00:26 +0000 (0:00:00.318) 0:00:19.309 **********
2026-03-01 01:02:08.655396 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:02:08.655400 | orchestrator |
2026-03-01 01:02:08.655404 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-01 01:02:08.655409 | orchestrator | Sunday 01 March 2026 01:00:26 +0000 (0:00:00.607) 0:00:19.917 **********
2026-03-01 01:02:08.655417 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655421 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655425 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655428 | orchestrator |
2026-03-01 01:02:08.655433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-01 01:02:08.655437 | orchestrator | Sunday 01 March 2026 01:00:27 +0000 (0:00:00.282) 0:00:20.199 **********
2026-03-01 01:02:08.655441 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655444 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655448 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655452 | orchestrator |
2026-03-01 01:02:08.655456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-01 01:02:08.655460 | orchestrator | Sunday 01 March 2026 01:00:27 +0000 (0:00:00.284) 0:00:20.484 **********
2026-03-01 01:02:08.655464 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655468 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655472 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:02:08.655476 | orchestrator |
2026-03-01 01:02:08.655480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-01 01:02:08.655484 | orchestrator | Sunday 01 March 2026 01:00:27 +0000 (0:00:00.274) 0:00:20.759 **********
2026-03-01 01:02:08.655487 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:02:08.655491 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:02:08.655495 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:02:08.655499 | orchestrator |
2026-03-01 01:02:08.655503 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-01 01:02:08.655507 | orchestrator | Sunday 01 March 2026 01:00:28 +0000 (0:00:00.678) 0:00:21.437 **********
2026-03-01 01:02:08.655511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:02:08.655515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 01:02:08.655518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 01:02:08.655522 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655526 | orchestrator |
2026-03-01 01:02:08.655530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-01 01:02:08.655534 | orchestrator | Sunday 01 March 2026 01:00:28 +0000 (0:00:00.357) 0:00:21.795 **********
2026-03-01 01:02:08.655538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:02:08.655542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 01:02:08.655545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 01:02:08.655549 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655553 | orchestrator |
2026-03-01 01:02:08.655557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-01 01:02:08.655561 | orchestrator | Sunday 01 March 2026 01:00:29 +0000 (0:00:00.375) 0:00:22.171 **********
2026-03-01 01:02:08.655565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:02:08.655569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-01 01:02:08.655577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-01 01:02:08.655581 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655584 | orchestrator |
2026-03-01 01:02:08.655588 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-01 01:02:08.655592 | orchestrator | Sunday 01 March 2026 01:00:29 +0000 (0:00:00.365) 0:00:22.536 **********
2026-03-01 01:02:08.655596 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:02:08.655600 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:02:08.655604 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:02:08.655608 | orchestrator |
2026-03-01 01:02:08.655612 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-01 01:02:08.655615 | orchestrator | Sunday 01 March 2026 01:00:29 +0000 (0:00:00.326) 0:00:22.863 **********
2026-03-01 01:02:08.655620 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-01 01:02:08.655645 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-01 01:02:08.655653 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-01 01:02:08.655657 | orchestrator |
2026-03-01 01:02:08.655661 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-01 01:02:08.655669 | orchestrator | Sunday 01 March 2026 01:00:30 +0000 (0:00:00.531) 0:00:23.394 **********
2026-03-01 01:02:08.655673 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-01 01:02:08.655677 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-01 01:02:08.655681 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-01 01:02:08.655685 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:02:08.655691 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-01 01:02:08.655698 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-01 01:02:08.655709 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-01 01:02:08.655714 | orchestrator |
2026-03-01 01:02:08.655720 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-01 01:02:08.655726 | orchestrator | Sunday 01 March 2026 01:00:31 +0000 (0:00:00.873) 0:00:24.268 **********
2026-03-01 01:02:08.655733 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-01 01:02:08.655739 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-01 01:02:08.655745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-01 01:02:08.655750 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-01 01:02:08.655756 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-01 01:02:08.655763 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-01 01:02:08.655775 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-01 01:02:08.655782 | orchestrator |
2026-03-01 01:02:08.655789 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-01 01:02:08.655795 | orchestrator | Sunday 01 March 2026 01:00:32 +0000 (0:00:01.639) 0:00:25.907 **********
2026-03-01 01:02:08.655801 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:02:08.655807 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:02:08.655811 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-01 01:02:08.655816 | orchestrator |
2026-03-01 01:02:08.655820 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-01 01:02:08.655824 | orchestrator | Sunday 01 March 2026 01:00:33 +0000 (0:00:00.345) 0:00:26.252 **********
2026-03-01 01:02:08.655830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-01 01:02:08.655841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-01 01:02:08.655845 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-01 01:02:08.655848 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-01 01:02:08.655852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-01 01:02:08.655856 | orchestrator |
2026-03-01 01:02:08.655860 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-01 01:02:08.655864 | orchestrator | Sunday 01 March 2026 01:01:16 +0000 (0:00:43.284) 0:01:09.537 **********
2026-03-01 01:02:08.655868 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655872 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655876 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655880 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655883 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655887 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655895 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-01 01:02:08.655899 | orchestrator |
2026-03-01 01:02:08.655903 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-01 01:02:08.655906 | orchestrator | Sunday 01 March 2026 01:01:38 +0000 (0:00:21.929) 0:01:31.466 **********
2026-03-01 01:02:08.655910 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655915 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655918 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655922 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655926 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655934 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-01 01:02:08.655937 | orchestrator |
2026-03-01 01:02:08.655941 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-01 01:02:08.655945 | orchestrator | Sunday 01 March 2026 01:01:50 +0000 (0:00:12.372) 0:01:43.839 **********
2026-03-01 01:02:08.655949 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655953 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.655957 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.655965 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655969 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.655976 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.655980 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655984 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.655988 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.655992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.655996 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.656000 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.656004 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.656007 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.656011 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.656015 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-01 01:02:08.656019 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-01 01:02:08.656023 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-01 01:02:08.656027 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-01 01:02:08.656031 | orchestrator |
2026-03-01 01:02:08.656035 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:02:08.656039 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-01 01:02:08.656044 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-01 01:02:08.656048 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-01 01:02:08.656052 | orchestrator |
2026-03-01 01:02:08.656056 | orchestrator |
2026-03-01 01:02:08.656060 | orchestrator |
2026-03-01 01:02:08.656064 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:02:08.656068 | orchestrator | Sunday 01 March 2026 01:02:07 +0000 (0:00:16.373) 0:02:00.213 **********
2026-03-01 01:02:08.656071 | orchestrator | ===============================================================================
2026-03-01 01:02:08.656075 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.28s
2026-03-01 01:02:08.656079 | orchestrator | generate keys ---------------------------------------------------------- 21.93s
2026-03-01 01:02:08.656083 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.37s
2026-03-01 01:02:08.656087 | orchestrator | get keys from monitors ------------------------------------------------- 12.37s
2026-03-01 01:02:08.656091 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.05s
2026-03-01 01:02:08.656095 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.85s
2026-03-01 01:02:08.656099 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.64s
2026-03-01 01:02:08.656103 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s
2026-03-01 01:02:08.656107 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.75s
2026-03-01 01:02:08.656111 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.70s
2026-03-01 01:02:08.656120 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s
2026-03-01 01:02:08.656128 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.66s
2026-03-01 01:02:08.656132 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.61s
2026-03-01 01:02:08.656136 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s
2026-03-01 01:02:08.656139 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.59s
2026-03-01 01:02:08.656143 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s
2026-03-01 01:02:08.656147 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.54s
2026-03-01 01:02:08.656151 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.53s
2026-03-01 01:02:08.656155 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.53s
2026-03-01 01:02:08.656159 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.53s
2026-03-01 01:02:08.656162 | orchestrator | 2026-03-01 01:02:08 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:08.656166 | orchestrator | 2026-03-01 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:11.704992 | orchestrator | 2026-03-01 01:02:11 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:11.707552 | orchestrator | 2026-03-01 01:02:11 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:11.709003 | orchestrator | 2026-03-01 01:02:11 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:11.709069 | orchestrator | 2026-03-01 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:14.748397 | orchestrator | 2026-03-01 01:02:14 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:14.750493 | orchestrator | 2026-03-01 01:02:14 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:14.751306 | orchestrator | 2026-03-01 01:02:14 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:14.751326 | orchestrator | 2026-03-01 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:17.795262 | orchestrator | 2026-03-01 01:02:17 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:17.798283 | orchestrator | 2026-03-01 01:02:17 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:17.802537 | orchestrator | 2026-03-01 01:02:17 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:17.802585 | orchestrator | 2026-03-01 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:20.847886 | orchestrator | 2026-03-01 01:02:20 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:20.850307 | orchestrator | 2026-03-01 01:02:20 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:20.852465 | orchestrator | 2026-03-01 01:02:20 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:20.852872 | orchestrator | 2026-03-01 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:23.902842 | orchestrator | 2026-03-01 01:02:23 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:23.905140 | orchestrator | 2026-03-01 01:02:23 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:23.907201 | orchestrator | 2026-03-01 01:02:23 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:23.907245 | orchestrator | 2026-03-01 01:02:23 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:26.959768 | orchestrator | 2026-03-01 01:02:26 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:26.960055 | orchestrator | 2026-03-01 01:02:26 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:26.963460 | orchestrator | 2026-03-01 01:02:26 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:26.963725 | orchestrator | 2026-03-01 01:02:26 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:30.001904 | orchestrator | 2026-03-01 01:02:30 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:30.004156 | orchestrator | 2026-03-01 01:02:30 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:30.115046 | orchestrator | 2026-03-01 01:02:30 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:30.115098 | orchestrator | 2026-03-01 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:33.045520 | orchestrator | 2026-03-01 01:02:33 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:33.047813 | orchestrator | 2026-03-01 01:02:33 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:33.049376 | orchestrator | 2026-03-01 01:02:33 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state STARTED
2026-03-01 01:02:33.049628 | orchestrator | 2026-03-01 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:36.089796 | orchestrator | 2026-03-01 01:02:36 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:36.093771 | orchestrator | 2026-03-01 01:02:36 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED
2026-03-01 01:02:36.095994 | orchestrator | 2026-03-01 01:02:36 | INFO  | Task 19608426-1169-4594-82f1-6e666e65ead8 is in state SUCCESS
2026-03-01 01:02:36.096046 | orchestrator | 2026-03-01 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:02:36.096790 | orchestrator |
2026-03-01 01:02:36.096830 | orchestrator |
2026-03-01 01:02:36.096839 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:02:36.096846 | orchestrator |
2026-03-01 01:02:36.096853 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:02:36.096860 | orchestrator | Sunday 01 March 2026 01:01:00 +0000 (0:00:00.232) 0:00:00.232 **********
2026-03-01 01:02:36.096866 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.096873 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.096879 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.096886 | orchestrator |
2026-03-01 01:02:36.096893 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 01:02:36.096899 | orchestrator | Sunday 01 March 2026 01:01:00 +0000
(0:00:00.282) 0:00:00.515 ********** 2026-03-01 01:02:36.097025 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-01 01:02:36.097032 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-01 01:02:36.097036 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-01 01:02:36.097040 | orchestrator | 2026-03-01 01:02:36.097043 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-01 01:02:36.097047 | orchestrator | 2026-03-01 01:02:36.097051 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-01 01:02:36.097055 | orchestrator | Sunday 01 March 2026 01:01:00 +0000 (0:00:00.359) 0:00:00.874 ********** 2026-03-01 01:02:36.097059 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:02:36.097064 | orchestrator | 2026-03-01 01:02:36.097067 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-01 01:02:36.097083 | orchestrator | Sunday 01 March 2026 01:01:01 +0000 (0:00:00.454) 0:00:01.328 ********** 2026-03-01 01:02:36.097109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.097125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.097135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.097140 | orchestrator | 2026-03-01 01:02:36.097144 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 
2026-03-01 01:02:36.097148 | orchestrator | Sunday 01 March 2026 01:01:02 +0000 (0:00:01.155) 0:00:02.484 **********
2026-03-01 01:02:36.097152 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097156 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097159 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097164 | orchestrator |
2026-03-01 01:02:36.097168 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-01 01:02:36.097172 | orchestrator | Sunday 01 March 2026 01:01:02 +0000 (0:00:00.363) 0:00:02.847 **********
2026-03-01 01:02:36.097176 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-01 01:02:36.097183 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-01 01:02:36.097187 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-01 01:02:36.097191 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-01 01:02:36.097195 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-01 01:02:36.097199 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-01 01:02:36.097203 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-01 01:02:36.097209 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-01 01:02:36.097213 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-01 01:02:36.097218 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-01 01:02:36.097225 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-01 01:02:36.097233 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-01 01:02:36.097243 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-01 01:02:36.097250 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-01 01:02:36.097256 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-01 01:02:36.097263 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-01 01:02:36.097269 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-01 01:02:36.097275 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-01 01:02:36.097281 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-01 01:02:36.097287 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-01 01:02:36.097293 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-01 01:02:36.097299 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-01 01:02:36.097307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-01 01:02:36.097313 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-01 01:02:36.097321 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-01 01:02:36.097330 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-01 01:02:36.097337 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-01 01:02:36.097344 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-01 01:02:36.097351 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-01 01:02:36.097358 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-01 01:02:36.097365 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-01 01:02:36.097369 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-01 01:02:36.097373 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-01 01:02:36.097377 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-01 01:02:36.097381 | orchestrator |
2026-03-01 01:02:36.097385 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097389 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:00.634) 0:00:03.482 **********
2026-03-01 01:02:36.097396 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097400 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097404 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097408 | orchestrator |
2026-03-01 01:02:36.097417 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097421 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:00.275) 0:00:03.757 **********
2026-03-01 01:02:36.097425 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097429 | orchestrator |
2026-03-01 01:02:36.097436 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097440 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:00.127) 0:00:03.885 **********
2026-03-01 01:02:36.097444 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097448 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097452 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097456 | orchestrator |
2026-03-01 01:02:36.097460 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097463 | orchestrator | Sunday 01 March 2026 01:01:04 +0000 (0:00:00.386) 0:00:04.272 **********
2026-03-01 01:02:36.097467 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097471 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097475 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097479 | orchestrator |
2026-03-01 01:02:36.097483 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097487 | orchestrator | Sunday 01 March 2026 01:01:04 +0000 (0:00:00.304) 0:00:04.576 **********
2026-03-01 01:02:36.097491 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097494 | orchestrator |
2026-03-01 01:02:36.097498 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097502 | orchestrator | Sunday 01 March 2026 01:01:04 +0000 (0:00:00.122) 0:00:04.699 **********
2026-03-01 01:02:36.097506 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097510 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097514 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097518 | orchestrator |
2026-03-01 01:02:36.097521 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097525 | orchestrator | Sunday 01 March 2026 01:01:04 +0000 (0:00:00.276) 0:00:04.975 **********
2026-03-01 01:02:36.097529 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097533 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097537 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097540 | orchestrator |
2026-03-01 01:02:36.097544 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097548 | orchestrator | Sunday 01 March 2026 01:01:05 +0000 (0:00:00.339) 0:00:05.314 **********
2026-03-01 01:02:36.097552 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097556 | orchestrator |
2026-03-01 01:02:36.097560 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097564 | orchestrator | Sunday 01 March 2026 01:01:05 +0000 (0:00:00.292) 0:00:05.607 **********
2026-03-01 01:02:36.097568 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097571 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097575 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097579 | orchestrator |
2026-03-01 01:02:36.097599 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097606 | orchestrator | Sunday 01 March 2026 01:01:05 +0000 (0:00:00.290) 0:00:05.897 **********
2026-03-01 01:02:36.097610 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097614 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097618 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097622 | orchestrator |
2026-03-01 01:02:36.097626 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097630 | orchestrator | Sunday 01 March 2026 01:01:06 +0000 (0:00:00.291) 0:00:06.188 **********
2026-03-01 01:02:36.097637 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097641 | orchestrator |
2026-03-01 01:02:36.097645 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097649 | orchestrator | Sunday 01 March 2026 01:01:06 +0000 (0:00:00.119) 0:00:06.308 **********
2026-03-01 01:02:36.097653 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097656 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097660 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097664 | orchestrator |
2026-03-01 01:02:36.097668 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097672 | orchestrator | Sunday 01 March 2026 01:01:06 +0000 (0:00:00.273) 0:00:06.581 **********
2026-03-01 01:02:36.097676 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097680 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097684 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097688 | orchestrator |
2026-03-01 01:02:36.097691 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097695 | orchestrator | Sunday 01 March 2026 01:01:06 +0000 (0:00:00.435) 0:00:07.016 **********
2026-03-01 01:02:36.097699 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097703 | orchestrator |
2026-03-01 01:02:36.097707 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097718 | orchestrator | Sunday 01 March 2026 01:01:06 +0000 (0:00:00.123) 0:00:07.140 **********
2026-03-01 01:02:36.097722 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097730 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097734 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097738 | orchestrator |
2026-03-01 01:02:36.097742 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097746 | orchestrator | Sunday 01 March 2026 01:01:07 +0000 (0:00:00.262) 0:00:07.403 **********
2026-03-01 01:02:36.097749 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097753 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097757 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097761 | orchestrator |
2026-03-01 01:02:36.097765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097769 | orchestrator | Sunday 01 March 2026 01:01:07 +0000 (0:00:00.390) 0:00:07.793 **********
2026-03-01 01:02:36.097773 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097777 | orchestrator |
2026-03-01 01:02:36.097781 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097785 | orchestrator | Sunday 01 March 2026 01:01:07 +0000 (0:00:00.119) 0:00:07.913 **********
2026-03-01 01:02:36.097789 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097793 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097797 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097801 | orchestrator |
2026-03-01 01:02:36.097805 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097812 | orchestrator | Sunday 01 March 2026 01:01:08 +0000 (0:00:00.263) 0:00:08.176 **********
2026-03-01 01:02:36.097817 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097821 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097825 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097829 | orchestrator |
2026-03-01 01:02:36.097833 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097837 | orchestrator | Sunday 01 March 2026 01:01:08 +0000 (0:00:00.415) 0:00:08.592 **********
2026-03-01 01:02:36.097840 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097844 | orchestrator |
2026-03-01 01:02:36.097848 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097853 | orchestrator | Sunday 01 March 2026 01:01:08 +0000 (0:00:00.139) 0:00:08.731 **********
2026-03-01 01:02:36.097856 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097860 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097864 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097871 | orchestrator |
2026-03-01 01:02:36.097875 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097878 | orchestrator | Sunday 01 March 2026 01:01:08 +0000 (0:00:00.254) 0:00:08.986 **********
2026-03-01 01:02:36.097882 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097886 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097890 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097894 | orchestrator |
2026-03-01 01:02:36.097898 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097902 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.278) 0:00:09.264 **********
2026-03-01 01:02:36.097906 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097909 | orchestrator |
2026-03-01 01:02:36.097913 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097917 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.126) 0:00:09.390 **********
2026-03-01 01:02:36.097921 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097925 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097929 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097933 | orchestrator |
2026-03-01 01:02:36.097937 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.097940 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.372) 0:00:09.763 **********
2026-03-01 01:02:36.097944 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.097948 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.097952 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.097956 | orchestrator |
2026-03-01 01:02:36.097960 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.097964 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.270) 0:00:10.033 **********
2026-03-01 01:02:36.097968 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097972 | orchestrator |
2026-03-01 01:02:36.097975 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.097979 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.123) 0:00:10.157 **********
2026-03-01 01:02:36.097983 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.097987 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.097991 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.097995 | orchestrator |
2026-03-01 01:02:36.097998 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-01 01:02:36.098002 | orchestrator | Sunday 01 March 2026 01:01:10 +0000 (0:00:00.298) 0:00:10.456 **********
2026-03-01 01:02:36.098006 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:02:36.098010 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:02:36.098040 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:02:36.098044 | orchestrator |
2026-03-01 01:02:36.098048 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-01 01:02:36.098052 | orchestrator | Sunday 01 March 2026 01:01:10 +0000 (0:00:00.277) 0:00:10.733 **********
2026-03-01 01:02:36.098056 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.098060 | orchestrator |
2026-03-01 01:02:36.098064 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-01 01:02:36.098068 | orchestrator | Sunday 01 March 2026 01:01:10 +0000 (0:00:00.135) 0:00:10.869 **********
2026-03-01 01:02:36.098072 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.098075 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.098079 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.098083 | orchestrator |
2026-03-01 01:02:36.098087 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-01 01:02:36.098091 | orchestrator | Sunday 01 March 2026 01:01:11 +0000 (0:00:00.400) 0:00:11.270 **********
2026-03-01 01:02:36.098094 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:02:36.098098 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:02:36.098102 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:02:36.098106 | orchestrator |
2026-03-01 01:02:36.098115 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-01 01:02:36.098119 | orchestrator | Sunday 01 March 2026 01:01:12 +0000 (0:00:01.391) 0:00:12.662 **********
2026-03-01 01:02:36.098123 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-01 01:02:36.098127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-01 01:02:36.098131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-01 01:02:36.098134 | orchestrator |
2026-03-01 01:02:36.098138 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-01 01:02:36.098142 | orchestrator | Sunday 01 March 2026 01:01:13 +0000 (0:00:01.484) 0:00:14.147 **********
2026-03-01 01:02:36.098146 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-01 01:02:36.098150 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-01 01:02:36.098154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-01 01:02:36.098158 | orchestrator |
2026-03-01 01:02:36.098162 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-01 01:02:36.098169 | orchestrator | Sunday 01 March 2026 01:01:16 +0000 (0:00:02.650) 0:00:16.797 **********
2026-03-01 01:02:36.098172 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-01 01:02:36.098176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-01 01:02:36.098180 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-01 01:02:36.098184 | orchestrator |
2026-03-01 01:02:36.098188 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-01 01:02:36.098192 | orchestrator | Sunday 01 March 2026 01:01:18 +0000 (0:00:01.944) 0:00:18.742 **********
2026-03-01 01:02:36.098195 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.098199 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.098203 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.098207 | orchestrator |
2026-03-01 01:02:36.098211 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-01 01:02:36.098215 | orchestrator | Sunday 01 March 2026 01:01:18 +0000 (0:00:00.318) 0:00:19.061 **********
2026-03-01 01:02:36.098218 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.098222 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.098226 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.098230 | orchestrator |
2026-03-01 01:02:36.098234 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-01 01:02:36.098238 | orchestrator | Sunday 01 March 2026 01:01:19 +0000 (0:00:00.376) 0:00:19.437 **********
2026-03-01 01:02:36.098241 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:02:36.098245 | orchestrator |
2026-03-01 01:02:36.098249 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-01 01:02:36.098253 | orchestrator | Sunday 01 March 2026 01:01:20 +0000 (0:00:00.764) 0:00:20.201 **********
2026-03-01 01:02:36.098260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-01 01:02:36.098271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER':
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.098281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.098288 | orchestrator | 2026-03-01 01:02:36.098292 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-01 01:02:36.098296 | orchestrator | 
Sunday 01 March 2026 01:01:21 +0000 (0:00:01.412) 0:00:21.614 ********** 2026-03-01 01:02:36.098305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098316 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:02:36.098331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098343 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:02:36.098351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098362 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:02:36.098368 | orchestrator | 2026-03-01 01:02:36.098376 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-01 01:02:36.098383 | orchestrator | Sunday 01 March 2026 01:01:22 +0000 (0:00:00.673) 0:00:22.288 ********** 2026-03-01 01:02:36.098398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098406 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:02:36.098413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098425 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:02:36.098439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-01 01:02:36.098447 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:02:36.098454 | 
orchestrator | 2026-03-01 01:02:36.098462 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-01 01:02:36.098466 | orchestrator | Sunday 01 March 2026 01:01:22 +0000 (0:00:00.844) 0:00:23.132 ********** 2026-03-01 01:02:36.098473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.098483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-01 01:02:36.098493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-01 01:02:36.098499 | orchestrator |
2026-03-01 01:02:36.098505 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-01 01:02:36.098516 | orchestrator | Sunday 01 March 2026 01:01:24 +0000 (0:00:01.449) 0:00:24.582 **********
2026-03-01 01:02:36.098523 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:02:36.098530 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:02:36.098536 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:02:36.098542 | orchestrator |
2026-03-01 01:02:36.098548 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-01 01:02:36.098554 | orchestrator | Sunday 01 March 2026 01:01:24 +0000 (0:00:00.274) 0:00:24.857 **********
2026-03-01 01:02:36.098560 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:02:36.098567 | orchestrator |
2026-03-01 01:02:36.098573 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-01 01:02:36.098631 | orchestrator | Sunday 01 March 2026 01:01:25 +0000 (0:00:00.501) 0:00:25.358 **********
2026-03-01 01:02:36.098637 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:02:36.098641 | orchestrator |
2026-03-01 01:02:36.098645 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-01 01:02:36.098649 | orchestrator | Sunday 01 March 2026 01:01:27 +0000 (0:00:02.302) 0:00:27.661 **********
2026-03-01 01:02:36.098653 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:02:36.098657 | orchestrator |
2026-03-01 01:02:36.098661 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-01 01:02:36.098665 | orchestrator | Sunday 01 March 2026 01:01:30 +0000 (0:00:02.688) 0:00:30.350 **********
2026-03-01 01:02:36.098669 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:02:36.098673 | orchestrator |
2026-03-01 01:02:36.098677 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-01 01:02:36.098685 | orchestrator | Sunday 01 March 2026 01:01:46 +0000 (0:00:15.913) 0:00:46.263 **********
2026-03-01 01:02:36.098688 | orchestrator |
2026-03-01 01:02:36.098692 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-01 01:02:36.098696 | orchestrator | Sunday 01 March 2026 01:01:46 +0000 (0:00:00.061) 0:00:46.324 **********
2026-03-01 01:02:36.098700 | orchestrator |
2026-03-01 01:02:36.098704 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-01 01:02:36.098708 | orchestrator | Sunday 01 March 2026 01:01:46 +0000 (0:00:00.061) 0:00:46.385 **********
2026-03-01 01:02:36.098711 | orchestrator |
2026-03-01 01:02:36.098715 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-01 01:02:36.098719 | orchestrator | Sunday 01 March 2026 01:01:46 +0000 (0:00:00.067) 0:00:46.452 **********
2026-03-01 01:02:36.098723 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:02:36.098726 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:02:36.098730 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:02:36.098734 | orchestrator |
2026-03-01 01:02:36.098740 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:02:36.098747 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-01 01:02:36.098754 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-01 01:02:36.098760 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-01 01:02:36.098766 | orchestrator |
2026-03-01 01:02:36.098772 | orchestrator |
2026-03-01 01:02:36.098778 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:02:36.098784 | orchestrator | Sunday 01 March 2026 01:02:33 +0000 (0:00:47.101) 0:01:33.553 **********
2026-03-01 01:02:36.098798 | orchestrator | ===============================================================================
2026-03-01 01:02:36.098809 | orchestrator | horizon : Restart horizon container ------------------------------------ 47.10s
2026-03-01 01:02:36.098820 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.91s
2026-03-01 01:02:36.098830 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.69s
2026-03-01 01:02:36.098839 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.65s
2026-03-01 01:02:36.098847 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.30s
2026-03-01 01:02:36.098856 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.95s
2026-03-01 01:02:36.098865 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.48s
2026-03-01 01:02:36.098874 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s
2026-03-01 01:02:36.098883 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.41s
2026-03-01 01:02:36.098891 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.39s
2026-03-01 01:02:36.098901 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s
2026-03-01 01:02:36.098910 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s
2026-03-01 01:02:36.098927 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2026-03-01 01:02:36.098937 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s
2026-03-01 01:02:36.098947 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s
2026-03-01 01:02:36.098957 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s
2026-03-01 01:02:36.098967 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.45s
2026-03-01 01:02:36.098974 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s
2026-03-01 01:02:36.098997 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s
2026-03-01 01:02:36.099013 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.40s
2026-03-01 01:02:39.138168 | orchestrator | 2026-03-01 01:02:39 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED
2026-03-01 01:02:39.139679 | orchestrator | 2026-03-01 01:02:39 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008
is in state STARTED 2026-03-01 01:02:39.139722 | orchestrator | 2026-03-01 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:42.182809 | orchestrator | 2026-03-01 01:02:42 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state STARTED 2026-03-01 01:02:42.186958 | orchestrator | 2026-03-01 01:02:42 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:42.187011 | orchestrator | 2026-03-01 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:45.225833 | orchestrator | 2026-03-01 01:02:45 | INFO  | Task f23ecf8c-7649-4c33-90c4-73e0fcbde987 is in state SUCCESS 2026-03-01 01:02:45.227231 | orchestrator | 2026-03-01 01:02:45 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:45.228154 | orchestrator | 2026-03-01 01:02:45 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:02:45.228190 | orchestrator | 2026-03-01 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:48.272476 | orchestrator | 2026-03-01 01:02:48 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:48.274355 | orchestrator | 2026-03-01 01:02:48 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:02:48.274401 | orchestrator | 2026-03-01 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:51.319277 | orchestrator | 2026-03-01 01:02:51 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:51.320991 | orchestrator | 2026-03-01 01:02:51 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:02:51.321048 | orchestrator | 2026-03-01 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:54.365578 | orchestrator | 2026-03-01 01:02:54 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:54.368465 | orchestrator | 2026-03-01 
01:02:54 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:02:54.368647 | orchestrator | 2026-03-01 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:02:57.416198 | orchestrator | 2026-03-01 01:02:57 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:02:57.418111 | orchestrator | 2026-03-01 01:02:57 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:02:57.418762 | orchestrator | 2026-03-01 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:00.461048 | orchestrator | 2026-03-01 01:03:00 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:00.463074 | orchestrator | 2026-03-01 01:03:00 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:00.463240 | orchestrator | 2026-03-01 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:03.507172 | orchestrator | 2026-03-01 01:03:03 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:03.509142 | orchestrator | 2026-03-01 01:03:03 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:03.509255 | orchestrator | 2026-03-01 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:06.552004 | orchestrator | 2026-03-01 01:03:06 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:06.554180 | orchestrator | 2026-03-01 01:03:06 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:06.554235 | orchestrator | 2026-03-01 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:09.595977 | orchestrator | 2026-03-01 01:03:09 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:09.596836 | orchestrator | 2026-03-01 01:03:09 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state 
STARTED 2026-03-01 01:03:09.596877 | orchestrator | 2026-03-01 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:12.645184 | orchestrator | 2026-03-01 01:03:12 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:12.647721 | orchestrator | 2026-03-01 01:03:12 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:12.647768 | orchestrator | 2026-03-01 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:15.691933 | orchestrator | 2026-03-01 01:03:15 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:15.693475 | orchestrator | 2026-03-01 01:03:15 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:15.693891 | orchestrator | 2026-03-01 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:18.731277 | orchestrator | 2026-03-01 01:03:18 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:18.732797 | orchestrator | 2026-03-01 01:03:18 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:18.733303 | orchestrator | 2026-03-01 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:21.778334 | orchestrator | 2026-03-01 01:03:21 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:21.780073 | orchestrator | 2026-03-01 01:03:21 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:21.780149 | orchestrator | 2026-03-01 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:24.824240 | orchestrator | 2026-03-01 01:03:24 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:24.825905 | orchestrator | 2026-03-01 01:03:24 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:24.825951 | orchestrator | 2026-03-01 01:03:24 | INFO  
| Wait 1 second(s) until the next check 2026-03-01 01:03:27.870114 | orchestrator | 2026-03-01 01:03:27 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:27.872959 | orchestrator | 2026-03-01 01:03:27 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:27.873010 | orchestrator | 2026-03-01 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:30.917078 | orchestrator | 2026-03-01 01:03:30 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:30.919841 | orchestrator | 2026-03-01 01:03:30 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:30.919891 | orchestrator | 2026-03-01 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:33.964291 | orchestrator | 2026-03-01 01:03:33 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state STARTED 2026-03-01 01:03:33.967707 | orchestrator | 2026-03-01 01:03:33 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED 2026-03-01 01:03:33.968100 | orchestrator | 2026-03-01 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:03:36.999904 | orchestrator | 2026-03-01 01:03:36.999964 | orchestrator | 2026-03-01 01:03:36.999975 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-01 01:03:36.999986 | orchestrator | 2026-03-01 01:03:36.999995 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-01 01:03:37 | orchestrator | Sunday 01 March 2026 01:02:11 +0000 (0:00:00.149) 0:00:00.149 ********** 2026-03-01 01:03:37.000005 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-01 01:03:37.000010 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000015 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000019 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:03:37.000024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000029 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-01 01:03:37.000033 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-01 01:03:37.000037 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-01 01:03:37.000050 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-01 01:03:37.000055 | orchestrator | 2026-03-01 01:03:37.000059 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-01 01:03:37.000064 | orchestrator | Sunday 01 March 2026 01:02:15 +0000 (0:00:04.319) 0:00:04.468 ********** 2026-03-01 01:03:37.000068 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-01 01:03:37.000073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000077 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000082 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:03:37.000086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000091 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.nova.keyring) 2026-03-01 01:03:37.000095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-01 01:03:37.000100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-01 01:03:37.000105 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-01 01:03:37.000109 | orchestrator | 2026-03-01 01:03:37.000114 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-01 01:03:37.000225 | orchestrator | Sunday 01 March 2026 01:02:19 +0000 (0:00:03.971) 0:00:08.440 ********** 2026-03-01 01:03:37.000230 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-01 01:03:37.000235 | orchestrator | 2026-03-01 01:03:37.000270 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-01 01:03:37.000276 | orchestrator | Sunday 01 March 2026 01:02:20 +0000 (0:00:00.989) 0:00:09.429 ********** 2026-03-01 01:03:37.000280 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-01 01:03:37.000285 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000301 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000306 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:03:37.000311 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000315 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-01 01:03:37.000320 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-01 01:03:37.000325 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-01 01:03:37.000329 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-01 01:03:37.000334 | orchestrator | 2026-03-01 01:03:37.000549 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-01 01:03:37.000555 | orchestrator | Sunday 01 March 2026 01:02:33 +0000 (0:00:12.870) 0:00:22.300 ********** 2026-03-01 01:03:37.000560 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-01 01:03:37.000564 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-01 01:03:37.000570 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-01 01:03:37.000574 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-01 01:03:37.000659 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-01 01:03:37.000666 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-01 01:03:37.000671 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-01 01:03:37.000676 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-01 01:03:37.000680 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-01 01:03:37.000685 | orchestrator | 2026-03-01 01:03:37.000689 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-01 01:03:37.000694 | orchestrator | Sunday 01 March 2026 01:02:36 +0000 (0:00:02.727) 
0:00:25.027 ********** 2026-03-01 01:03:37.000699 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-01 01:03:37.000704 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000709 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000713 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:03:37.000718 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-01 01:03:37.000722 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-01 01:03:37.000735 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-01 01:03:37.000742 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-01 01:03:37.000750 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-01 01:03:37.000758 | orchestrator | 2026-03-01 01:03:37.000765 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:03:37.000772 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:03:37.000781 | orchestrator | 2026-03-01 01:03:37.000790 | orchestrator | 2026-03-01 01:03:37.000799 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:03:37.000807 | orchestrator | Sunday 01 March 2026 01:02:42 +0000 (0:00:06.072) 0:00:31.100 ********** 2026-03-01 01:03:37.000826 | orchestrator | =============================================================================== 2026-03-01 01:03:37.000831 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.87s 2026-03-01 01:03:37.000836 | orchestrator | Write ceph keys to the configuration directory -------------------------- 
6.07s 2026-03-01 01:03:37.000840 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.32s 2026-03-01 01:03:37.000845 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.97s 2026-03-01 01:03:37.000849 | orchestrator | Check if target directories exist --------------------------------------- 2.73s 2026-03-01 01:03:37.000854 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2026-03-01 01:03:37.000858 | orchestrator | 2026-03-01 01:03:37.000863 | orchestrator | 2026-03-01 01:03:37.000867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:03:37.000872 | orchestrator | 2026-03-01 01:03:37.000876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:03:37.000881 | orchestrator | Sunday 01 March 2026 01:01:00 +0000 (0:00:00.237) 0:00:00.237 ********** 2026-03-01 01:03:37.000885 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:03:37.000890 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:03:37.000895 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:03:37.000899 | orchestrator | 2026-03-01 01:03:37.000904 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:03:37.000910 | orchestrator | Sunday 01 March 2026 01:01:00 +0000 (0:00:00.272) 0:00:00.510 ********** 2026-03-01 01:03:37.000942 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-01 01:03:37.000952 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-01 01:03:37.000960 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-01 01:03:37.000969 | orchestrator | 2026-03-01 01:03:37.000977 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-01 01:03:37.000986 | orchestrator | 2026-03-01 
01:03:37.000994 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-01 01:03:37.001003 | orchestrator | Sunday 01 March 2026 01:01:00 +0000 (0:00:00.368) 0:00:00.878 ********** 2026-03-01 01:03:37.001012 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:03:37.001021 | orchestrator | 2026-03-01 01:03:37.001029 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-01 01:03:37.001034 | orchestrator | Sunday 01 March 2026 01:01:01 +0000 (0:00:00.500) 0:00:01.378 ********** 2026-03-01 01:03:37.001068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001147 | orchestrator | 2026-03-01 01:03:37.001151 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-01 01:03:37.001156 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:02.050) 
0:00:03.428 ********** 2026-03-01 01:03:37.001161 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001203 | orchestrator | 2026-03-01 01:03:37.001210 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-01 01:03:37.001215 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:00.124) 0:00:03.552 ********** 2026-03-01 01:03:37.001219 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001224 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001229 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001233 | orchestrator | 2026-03-01 01:03:37.001238 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-01 01:03:37.001242 | orchestrator | Sunday 01 March 2026 01:01:03 +0000 (0:00:00.360) 0:00:03.913 ********** 2026-03-01 01:03:37.001247 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:03:37.001252 | orchestrator | 2026-03-01 01:03:37.001256 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-01 01:03:37.001261 | orchestrator | Sunday 01 March 2026 01:01:04 +0000 (0:00:00.821) 0:00:04.734 ********** 2026-03-01 01:03:37.001265 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:03:37.001270 | orchestrator | 2026-03-01 01:03:37.001275 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-01 01:03:37.001279 | orchestrator | Sunday 01 March 2026 01:01:05 +0000 (0:00:00.471) 0:00:05.206 ********** 2026-03-01 01:03:37.001299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001359 | orchestrator | 2026-03-01 01:03:37.001364 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-01 01:03:37.001369 | orchestrator | Sunday 01 March 2026 01:01:08 +0000 (0:00:03.705) 0:00:08.911 ********** 2026-03-01 01:03:37.001374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001391 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001423 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001450 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001454 | orchestrator | 2026-03-01 01:03:37.001459 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-01 01:03:37.001464 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.529) 0:00:09.440 ********** 2026-03-01 01:03:37.001471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001500 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001528 | orchestrator | 2026-03-01 01:03:36 | INFO  | Task d443c950-21b8-439f-93cc-a3c9b093c008 is in state SUCCESS 2026-03-01 01:03:37.001533 | orchestrator | 2026-03-01 01:03:36 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:03:37.001540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001545 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001567 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001572 | orchestrator | 2026-03-01 01:03:37.001577 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-01 01:03:37.001582 | orchestrator | Sunday 01 March 2026 01:01:09 +0000 (0:00:00.677) 0:00:10.117 ********** 2026-03-01 01:03:37.001591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001663 | orchestrator | 2026-03-01 01:03:37.001668 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-01 01:03:37.001673 | orchestrator | Sunday 01 March 2026 01:01:13 +0000 (0:00:03.160) 0:00:13.277 ********** 2026-03-01 01:03:37.001681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.001718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.001740 | orchestrator | 2026-03-01 01:03:37.001745 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-01 01:03:37.001750 | orchestrator | Sunday 01 March 2026 01:01:18 +0000 (0:00:05.585) 0:00:18.863 ********** 2026-03-01 01:03:37.001757 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:03:37.001762 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:03:37.001767 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:03:37.001771 | orchestrator | 2026-03-01 01:03:37.001776 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-01 01:03:37.001780 | orchestrator | Sunday 01 March 2026 01:01:20 +0000 (0:00:01.536) 0:00:20.400 ********** 2026-03-01 01:03:37.001785 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001790 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001794 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001799 | orchestrator | 2026-03-01 01:03:37.001803 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-01 01:03:37.001808 | orchestrator | Sunday 01 
March 2026 01:01:21 +0000 (0:00:00.729) 0:00:21.129 ********** 2026-03-01 01:03:37.001813 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001820 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001825 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001830 | orchestrator | 2026-03-01 01:03:37.001834 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-01 01:03:37.001839 | orchestrator | Sunday 01 March 2026 01:01:21 +0000 (0:00:00.285) 0:00:21.415 ********** 2026-03-01 01:03:37.001844 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001848 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001853 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001859 | orchestrator | 2026-03-01 01:03:37.001865 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-01 01:03:37.001870 | orchestrator | Sunday 01 March 2026 01:01:21 +0000 (0:00:00.462) 0:00:21.877 ********** 2026-03-01 01:03:37.001876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001899 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001930 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-01 01:03:37.001945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-01 01:03:37.001951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-01 01:03:37.001956 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001962 | orchestrator | 2026-03-01 01:03:37.001967 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-01 01:03:37.001973 | orchestrator | Sunday 01 March 2026 01:01:22 +0000 (0:00:00.645) 0:00:22.523 ********** 2026-03-01 01:03:37.001981 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.001987 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.001992 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.001998 | orchestrator | 2026-03-01 01:03:37.002005 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-01 01:03:37.002041 | orchestrator | Sunday 01 March 2026 01:01:22 +0000 (0:00:00.325) 0:00:22.848 ********** 2026-03-01 01:03:37.002048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-01 01:03:37.002054 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-01 01:03:37.002059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-01 01:03:37.002065 | orchestrator | 2026-03-01 01:03:37.002071 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-01 01:03:37.002076 | orchestrator | Sunday 01 March 2026 01:01:24 +0000 (0:00:01.485) 0:00:24.334 ********** 2026-03-01 01:03:37.002082 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:03:37.002088 | orchestrator | 2026-03-01 01:03:37.002093 | orchestrator | TASK [keystone : Copying over 
keystone-paste.ini] ****************************** 2026-03-01 01:03:37.002099 | orchestrator | Sunday 01 March 2026 01:01:25 +0000 (0:00:01.193) 0:00:25.528 ********** 2026-03-01 01:03:37.002105 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.002110 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.002115 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.002121 | orchestrator | 2026-03-01 01:03:37.002126 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-01 01:03:37.002132 | orchestrator | Sunday 01 March 2026 01:01:26 +0000 (0:00:00.765) 0:00:26.293 ********** 2026-03-01 01:03:37.002137 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-01 01:03:37.002143 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:03:37.002148 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-01 01:03:37.002154 | orchestrator | 2026-03-01 01:03:37.002159 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-01 01:03:37.002165 | orchestrator | Sunday 01 March 2026 01:01:27 +0000 (0:00:01.026) 0:00:27.320 ********** 2026-03-01 01:03:37.002171 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:03:37.002176 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:03:37.002181 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:03:37.002185 | orchestrator | 2026-03-01 01:03:37.002190 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-01 01:03:37.002195 | orchestrator | Sunday 01 March 2026 01:01:27 +0000 (0:00:00.292) 0:00:27.613 ********** 2026-03-01 01:03:37.002199 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-01 01:03:37.002204 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-01 01:03:37.002208 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-01 01:03:37.002213 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-01 01:03:37.002218 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-01 01:03:37.002222 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-01 01:03:37.002227 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-01 01:03:37.002231 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-01 01:03:37.002236 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-01 01:03:37.002241 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-01 01:03:37.002248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-01 01:03:37.002253 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-01 01:03:37.002260 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-01 01:03:37.002265 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-01 01:03:37.002270 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-01 01:03:37.002275 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-01 01:03:37.002279 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-01 
01:03:37.002284 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-01 01:03:37.002288 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-01 01:03:37.002293 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-01 01:03:37.002298 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-01 01:03:37.002302 | orchestrator | 2026-03-01 01:03:37.002307 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-01 01:03:37.002311 | orchestrator | Sunday 01 March 2026 01:01:36 +0000 (0:00:09.433) 0:00:37.046 ********** 2026-03-01 01:03:37.002316 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-01 01:03:37.002323 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-01 01:03:37.002328 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-01 01:03:37.002332 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-01 01:03:37.002337 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-01 01:03:37.002342 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-01 01:03:37.002346 | orchestrator | 2026-03-01 01:03:37.002351 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-01 01:03:37.002355 | orchestrator | Sunday 01 March 2026 01:01:39 +0000 (0:00:02.923) 0:00:39.970 ********** 2026-03-01 01:03:37.002360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.002366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.002377 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-01 01:03:37.002384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-01 01:03:37.002419 | orchestrator | 2026-03-01 01:03:37.002423 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-01 01:03:37.002428 | orchestrator | Sunday 01 March 2026 01:01:42 +0000 (0:00:02.378) 0:00:42.349 ********** 2026-03-01 01:03:37.002433 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:03:37.002437 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:03:37.002442 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:03:37.002446 | orchestrator | 2026-03-01 01:03:37.002451 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-01 01:03:37.002456 | orchestrator | Sunday 01 March 2026 01:01:42 +0000 (0:00:00.313) 0:00:42.662 ********** 2026-03-01 01:03:37.002460 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:03:37.002465 | orchestrator | 2026-03-01 01:03:37.002469 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-01 01:03:37.002474 | orchestrator | Sunday 01 March 2026 01:01:45 +0000 (0:00:02.471) 0:00:45.134 
**********
2026-03-01 01:03:37.002479 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002483 | orchestrator | 
2026-03-01 01:03:37.002501 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-01 01:03:37.002505 | orchestrator | Sunday 01 March 2026 01:01:47 +0000 (0:00:02.678) 0:00:47.812 **********
2026-03-01 01:03:37.002510 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:03:37.002514 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:03:37.002521 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:03:37.002528 | orchestrator | 
2026-03-01 01:03:37.002538 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-01 01:03:37.002549 | orchestrator | Sunday 01 March 2026 01:01:48 +0000 (0:00:01.131) 0:00:48.944 **********
2026-03-01 01:03:37.002556 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:03:37.002564 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:03:37.002571 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:03:37.002578 | orchestrator | 
2026-03-01 01:03:37.002586 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-01 01:03:37.002594 | orchestrator | Sunday 01 March 2026 01:01:49 +0000 (0:00:00.421) 0:00:49.365 **********
2026-03-01 01:03:37.002602 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.002610 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:03:37.002618 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:03:37.002626 | orchestrator | 
2026-03-01 01:03:37.002633 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-01 01:03:37.002637 | orchestrator | Sunday 01 March 2026 01:01:49 +0000 (0:00:00.411) 0:00:49.777 **********
2026-03-01 01:03:37.002648 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002652 | orchestrator | 
2026-03-01 01:03:37.002657 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-01 01:03:37.002661 | orchestrator | Sunday 01 March 2026 01:02:02 +0000 (0:00:13.321) 0:01:03.099 **********
2026-03-01 01:03:37.002666 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002670 | orchestrator | 
2026-03-01 01:03:37.002675 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-01 01:03:37.002679 | orchestrator | Sunday 01 March 2026 01:02:14 +0000 (0:00:11.224) 0:01:14.323 **********
2026-03-01 01:03:37.002684 | orchestrator | 
2026-03-01 01:03:37.002689 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-01 01:03:37.002693 | orchestrator | Sunday 01 March 2026 01:02:14 +0000 (0:00:00.075) 0:01:14.398 **********
2026-03-01 01:03:37.002698 | orchestrator | 
2026-03-01 01:03:37.002703 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-01 01:03:37.002707 | orchestrator | Sunday 01 March 2026 01:02:14 +0000 (0:00:00.058) 0:01:14.456 **********
2026-03-01 01:03:37.002712 | orchestrator | 
2026-03-01 01:03:37.002716 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-01 01:03:37.002721 | orchestrator | Sunday 01 March 2026 01:02:14 +0000 (0:00:00.060) 0:01:14.517 **********
2026-03-01 01:03:37.002725 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002730 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:03:37.002735 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:03:37.002739 | orchestrator | 
2026-03-01 01:03:37.002744 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-01 01:03:37.002749 | orchestrator | Sunday 01 March 2026 01:02:26 +0000 (0:00:12.519) 0:01:27.036 **********
2026-03-01 01:03:37.002753 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002758 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:03:37.002762 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:03:37.002767 | orchestrator | 
2026-03-01 01:03:37.002772 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-01 01:03:37.002776 | orchestrator | Sunday 01 March 2026 01:02:36 +0000 (0:00:09.148) 0:01:36.184 **********
2026-03-01 01:03:37.002781 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:03:37.002785 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:03:37.002790 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002795 | orchestrator | 
2026-03-01 01:03:37.002799 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-01 01:03:37.002804 | orchestrator | Sunday 01 March 2026 01:02:43 +0000 (0:00:07.470) 0:01:43.655 **********
2026-03-01 01:03:37.002809 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:03:37.002813 | orchestrator | 
2026-03-01 01:03:37.002818 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-01 01:03:37.002822 | orchestrator | Sunday 01 March 2026 01:02:44 +0000 (0:00:00.621) 0:01:44.276 **********
2026-03-01 01:03:37.002827 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:03:37.002831 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:03:37.002836 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:03:37.002841 | orchestrator | 
2026-03-01 01:03:37.002845 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-01 01:03:37.002850 | orchestrator | Sunday 01 March 2026 01:02:44 +0000 (0:00:00.681) 0:01:44.958 **********
2026-03-01 01:03:37.002859 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:03:37.002863 | orchestrator | 
2026-03-01 01:03:37.002868 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-01 01:03:37.002873 | orchestrator | Sunday 01 March 2026 01:02:46 +0000 (0:00:01.570) 0:01:46.528 **********
2026-03-01 01:03:37.002877 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-01 01:03:37.002882 | orchestrator | 
2026-03-01 01:03:37.002886 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-01 01:03:37.002894 | orchestrator | Sunday 01 March 2026 01:02:58 +0000 (0:00:11.870) 0:01:58.399 **********
2026-03-01 01:03:37.002899 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-01 01:03:37.002903 | orchestrator | 
2026-03-01 01:03:37.002908 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-01 01:03:37.002912 | orchestrator | Sunday 01 March 2026 01:03:23 +0000 (0:00:25.187) 0:02:23.587 **********
2026-03-01 01:03:37.002917 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-01 01:03:37.002922 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-01 01:03:37.002926 | orchestrator | 
2026-03-01 01:03:37.002931 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-01 01:03:37.002935 | orchestrator | Sunday 01 March 2026 01:03:30 +0000 (0:00:06.619) 0:02:30.206 **********
2026-03-01 01:03:37.002940 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.002944 | orchestrator | 
2026-03-01 01:03:37.002951 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-01 01:03:37.002956 | orchestrator | Sunday 01 March 2026 01:03:30 +0000 (0:00:00.129) 0:02:30.335 **********
2026-03-01 01:03:37.002960 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.002965 | orchestrator | 
2026-03-01 01:03:37.002970 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-01 01:03:37.002974 | orchestrator | Sunday 01 March 2026 01:03:30 +0000 (0:00:00.125) 0:02:30.461 **********
2026-03-01 01:03:37.002979 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.002983 | orchestrator | 
2026-03-01 01:03:37.002988 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-01 01:03:37.002992 | orchestrator | Sunday 01 March 2026 01:03:30 +0000 (0:00:00.120) 0:02:30.582 **********
2026-03-01 01:03:37.002997 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.003001 | orchestrator | 
2026-03-01 01:03:37.003006 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-01 01:03:37.003010 | orchestrator | Sunday 01 March 2026 01:03:30 +0000 (0:00:00.522) 0:02:31.104 **********
2026-03-01 01:03:37.003015 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:03:37.003023 | orchestrator | 
2026-03-01 01:03:37.003030 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-01 01:03:37.003038 | orchestrator | Sunday 01 March 2026 01:03:34 +0000 (0:00:03.947) 0:02:35.051 **********
2026-03-01 01:03:37.003046 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:03:37.003054 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:03:37.003061 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:03:37.003068 | orchestrator | 
2026-03-01 01:03:37.003076 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:03:37.003084 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-01 01:03:37.003093 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-01 01:03:37.003101 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-01 01:03:37.003109 | orchestrator | 
2026-03-01 01:03:37.003117 | orchestrator | 
2026-03-01 01:03:37.003123 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:03:37.003128 | orchestrator | Sunday 01 March 2026 01:03:35 +0000 (0:00:00.428) 0:02:35.480 **********
2026-03-01 01:03:37.003132 | orchestrator | ===============================================================================
2026-03-01 01:03:37.003137 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.19s
2026-03-01 01:03:37.003141 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.32s
2026-03-01 01:03:37.003150 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 12.52s
2026-03-01 01:03:37.003155 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.87s
2026-03-01 01:03:37.003159 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.22s
2026-03-01 01:03:37.003164 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.43s
2026-03-01 01:03:37.003169 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.15s
2026-03-01 01:03:37.003173 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.47s
2026-03-01 01:03:37.003178 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.62s
2026-03-01 01:03:37.003182 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.59s
2026-03-01 01:03:37.003187 | orchestrator | keystone : Creating default user role ----------------------------------- 3.95s
2026-03-01 01:03:37.003191 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.71s
2026-03-01 01:03:37.003196 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.16s
2026-03-01 01:03:37.003200 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.92s
2026-03-01 01:03:37.003209 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.68s
2026-03-01 01:03:37.003214 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.47s
2026-03-01 01:03:37.003218 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.38s
2026-03-01 01:03:37.003223 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.05s
2026-03-01 01:03:37.003227 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.57s
2026-03-01 01:03:37.003232 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.54s
2026-03-01 01:03:37.003236 | orchestrator | 2026-03-01 01:03:36 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:03:37.003241 | orchestrator | 2026-03-01 01:03:37 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED
2026-03-01 01:03:37.003246 | orchestrator | 2026-03-01 01:03:37 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:03:37.003250 | orchestrator | 2026-03-01 01:03:37 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED
2026-03-01 01:03:37.003255 | orchestrator | 2026-03-01 01:03:37 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:03:40.033232 | orchestrator | 2026-03-01 01:03:40 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED
2026-03-01 01:03:40.033726 | orchestrator | 2026-03-01 01:03:40 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:03:40.033919 | orchestrator | 2026-03-01 01:03:40 |
INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state STARTED
2026-03-01 01:03:40.034753 | orchestrator | 2026-03-01 01:03:40 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:03:40.035661 | orchestrator | 2026-03-01 01:03:40 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED
2026-03-01 01:03:40.035720 | orchestrator | 2026-03-01 01:03:40 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:03:43.067145 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED
2026-03-01 01:03:43.068303 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:03:43.072394 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task aca514bb-489b-4504-97ef-3ad87aaf2bf2 is in state SUCCESS
2026-03-01 01:03:43.074349 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:03:43.076000 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task 405abeee-f226-4d22-971c-718dafe112fd is in state STARTED
2026-03-01 01:03:43.077559 | orchestrator | 2026-03-01 01:03:43 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED
2026-03-01 01:03:43.077840 | orchestrator | 2026-03-01 01:03:43 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:04:56.122140 | orchestrator | 2026-03-01 01:04:56 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED
2026-03-01 01:04:56.123321 | orchestrator | 2026-03-01 01:04:56 | INFO  | Task
b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:04:56.123977 | orchestrator | 2026-03-01 01:04:56 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:04:56.124735 | orchestrator | 2026-03-01 01:04:56 | INFO  | Task 405abeee-f226-4d22-971c-718dafe112fd is in state STARTED
2026-03-01 01:04:56.125545 | orchestrator | 2026-03-01 01:04:56 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED
2026-03-01 01:04:56.126315 | orchestrator | 2026-03-01 01:04:56 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:04:59.153932 | orchestrator | 2026-03-01 01:04:59 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED
2026-03-01 01:04:59.154183 | orchestrator | 2026-03-01 01:04:59 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:04:59.154934 | orchestrator | 2026-03-01 01:04:59 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:04:59.155549 | orchestrator | 2026-03-01 01:04:59 | INFO  | Task 405abeee-f226-4d22-971c-718dafe112fd is in state STARTED
2026-03-01 01:04:59.156187 | orchestrator | 2026-03-01 01:04:59 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED
2026-03-01 01:04:59.156214 | orchestrator | 2026-03-01 01:04:59 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:05:02.202848 | orchestrator | 2026-03-01 01:05:02 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED
2026-03-01 01:05:02.202899 | orchestrator | 2026-03-01 01:05:02 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:05:02.203740 | orchestrator | 2026-03-01 01:05:02 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:05:02.205121 | orchestrator |
2026-03-01 01:05:02.205159 | orchestrator |
2026-03-01 01:05:02.205164 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-01 01:05:02.205168 | orchestrator |
2026-03-01 01:05:02.205172 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-01 01:05:02.205175 | orchestrator | Sunday 01 March 2026 01:02:46 +0000 (0:00:00.221) 0:00:00.221 **********
2026-03-01 01:05:02.205178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-01 01:05:02.205182 | orchestrator |
2026-03-01 01:05:02.205186 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-01 01:05:02.205189 | orchestrator | Sunday 01 March 2026 01:02:46 +0000 (0:00:00.217) 0:00:00.438 **********
2026-03-01 01:05:02.205193 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-01 01:05:02.205196 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-01 01:05:02.205200 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-01 01:05:02.205204 | orchestrator |
2026-03-01 01:05:02.205207 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-01 01:05:02.205210 | orchestrator | Sunday 01 March 2026 01:02:47 +0000 (0:00:01.183) 0:00:01.622 **********
2026-03-01 01:05:02.205213 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-01 01:05:02.205217 | orchestrator |
2026-03-01 01:05:02.205220 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-01 01:05:02.205223 | orchestrator | Sunday 01 March 2026 01:02:48 +0000 (0:00:01.203) 0:00:02.825 **********
2026-03-01 01:05:02.205226 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205229 | orchestrator |
2026-03-01 01:05:02.205251 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-01 01:05:02.205255 | orchestrator | Sunday 01 March 2026 01:02:49 +0000 (0:00:00.797) 0:00:03.622 **********
2026-03-01 01:05:02.205258 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205261 | orchestrator |
2026-03-01 01:05:02.205264 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-01 01:05:02.205267 | orchestrator | Sunday 01 March 2026 01:02:50 +0000 (0:00:00.768) 0:00:04.391 **********
2026-03-01 01:05:02.205270 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-01 01:05:02.205273 | orchestrator | ok: [testbed-manager]
2026-03-01 01:05:02.205277 | orchestrator |
2026-03-01 01:05:02.205280 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-01 01:05:02.205283 | orchestrator | Sunday 01 March 2026 01:03:31 +0000 (0:00:40.532) 0:00:44.923 **********
2026-03-01 01:05:02.205286 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-01 01:05:02.205289 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-01 01:05:02.205293 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-01 01:05:02.205296 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-01 01:05:02.205299 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-01 01:05:02.205302 | orchestrator |
2026-03-01 01:05:02.205305 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-01 01:05:02.205308 | orchestrator | Sunday 01 March 2026 01:03:35 +0000 (0:00:04.076) 0:00:49.000 **********
2026-03-01 01:05:02.205311 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-01 01:05:02.205314 | orchestrator |
2026-03-01 01:05:02.205317 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-01 01:05:02.205320 | orchestrator | Sunday 01 March 2026 01:03:35 +0000 (0:00:00.418) 0:00:49.418 **********
2026-03-01 01:05:02.205323 | orchestrator | skipping: [testbed-manager]
2026-03-01 01:05:02.205327 | orchestrator |
2026-03-01 01:05:02.205330 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-01 01:05:02.205333 | orchestrator | Sunday 01 March 2026 01:03:35 +0000 (0:00:00.111) 0:00:49.530 **********
2026-03-01 01:05:02.205336 | orchestrator | skipping: [testbed-manager]
2026-03-01 01:05:02.205339 | orchestrator |
2026-03-01 01:05:02.205342 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-01 01:05:02.205404 | orchestrator | Sunday 01 March 2026 01:03:36 +0000 (0:00:00.428) 0:00:49.959 **********
2026-03-01 01:05:02.205407 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205410 | orchestrator |
2026-03-01 01:05:02.205414 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-01 01:05:02.205420 | orchestrator | Sunday 01 March 2026 01:03:37 +0000 (0:00:01.302) 0:00:51.261 **********
2026-03-01 01:05:02.205424 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205432 | orchestrator |
2026-03-01 01:05:02.205439 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-01 01:05:02.205444 | orchestrator | Sunday 01 March 2026 01:03:38 +0000 (0:00:00.736) 0:00:51.997 **********
2026-03-01 01:05:02.205450 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205455 | orchestrator |
2026-03-01 01:05:02.205460 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-01 01:05:02.205466 | orchestrator | Sunday 01 March 2026 01:03:38 +0000 (0:00:00.770) 0:00:52.768 **********
2026-03-01 01:05:02.205472 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-01 01:05:02.205477 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-01 01:05:02.205482 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-01 01:05:02.205485 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-01 01:05:02.205488 | orchestrator |
2026-03-01 01:05:02.205491 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:05:02.205494 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-01 01:05:02.205502 | orchestrator |
2026-03-01 01:05:02.205505 | orchestrator |
2026-03-01 01:05:02.205516 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:05:02.205519 | orchestrator | Sunday 01 March 2026 01:03:40 +0000 (0:00:01.166) 0:00:53.934 **********
2026-03-01 01:05:02.205523 | orchestrator | ===============================================================================
2026-03-01 01:05:02.205526 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.53s
2026-03-01 01:05:02.205529 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.08s
2026-03-01 01:05:02.205532 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.30s
2026-03-01 01:05:02.205535 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s
2026-03-01 01:05:02.205538 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s
2026-03-01 01:05:02.205541 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.17s
2026-03-01 01:05:02.205545 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.80s
2026-03-01 01:05:02.205548 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.77s
2026-03-01 01:05:02.205551 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.77s
2026-03-01 01:05:02.205554 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s
2026-03-01 01:05:02.205557 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.43s
2026-03-01 01:05:02.205560 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2026-03-01 01:05:02.205563 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-03-01 01:05:02.205566 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2026-03-01 01:05:02.205570 | orchestrator |
2026-03-01 01:05:02.205575 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-01 01:05:02.205579 | orchestrator | 2.16.14
2026-03-01 01:05:02.205582 | orchestrator |
2026-03-01 01:05:02.205585 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-01 01:05:02.205589 | orchestrator |
2026-03-01 01:05:02.205592 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-01 01:05:02.205595 | orchestrator | Sunday 01 March 2026 01:03:43 +0000 (0:00:00.204) 0:00:00.204 **********
2026-03-01 01:05:02.205598 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205601 | orchestrator |
2026-03-01 01:05:02.205605 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-01 01:05:02.205608 | orchestrator | Sunday 01 March 2026 01:03:45 +0000 (0:00:01.671) 0:00:01.876 **********
2026-03-01 01:05:02.205611 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205614 | orchestrator |
2026-03-01 01:05:02.205617 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-01 01:05:02.205620 | orchestrator | Sunday 01 March 2026 01:03:46 +0000 (0:00:00.911) 0:00:02.788 **********
2026-03-01 01:05:02.205623 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205626 | orchestrator |
2026-03-01 01:05:02.205630 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-01 01:05:02.205633 | orchestrator | Sunday 01 March 2026 01:03:47 +0000 (0:00:00.879) 0:00:03.668 **********
2026-03-01 01:05:02.205636 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205639 | orchestrator |
2026-03-01 01:05:02.205642 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-01 01:05:02.205645 | orchestrator | Sunday 01 March 2026 01:03:48 +0000 (0:00:01.003) 0:00:04.671 **********
2026-03-01 01:05:02.205649 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205652 | orchestrator |
2026-03-01 01:05:02.205655 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-01 01:05:02.205661 | orchestrator | Sunday 01 March 2026 01:03:49 +0000 (0:00:00.968) 0:00:05.640 **********
2026-03-01 01:05:02.205664 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205667 | orchestrator |
2026-03-01 01:05:02.205670 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-01 01:05:02.205673 | orchestrator | Sunday 01 March 2026 01:03:50 +0000 (0:00:00.992) 0:00:06.633 **********
2026-03-01 01:05:02.205676 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205679 | orchestrator |
2026-03-01 01:05:02.205683 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-01 01:05:02.205686 | orchestrator | Sunday 01 March 2026 01:03:51 +0000 (0:00:01.153) 0:00:07.786 **********
2026-03-01 01:05:02.205689 | orchestrator | changed: [testbed-manager]
2026-03-01 01:05:02.205692 |
orchestrator | 2026-03-01 01:05:02.205695 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-01 01:05:02.205698 | orchestrator | Sunday 01 March 2026 01:03:52 +0000 (0:00:01.088) 0:00:08.875 ********** 2026-03-01 01:05:02.205701 | orchestrator | changed: [testbed-manager] 2026-03-01 01:05:02.205705 | orchestrator | 2026-03-01 01:05:02.205708 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-01 01:05:02.205711 | orchestrator | Sunday 01 March 2026 01:04:35 +0000 (0:00:42.620) 0:00:51.495 ********** 2026-03-01 01:05:02.205714 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:05:02.205717 | orchestrator | 2026-03-01 01:05:02.205720 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-01 01:05:02.205723 | orchestrator | 2026-03-01 01:05:02.205727 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-01 01:05:02.205730 | orchestrator | Sunday 01 March 2026 01:04:35 +0000 (0:00:00.142) 0:00:51.638 ********** 2026-03-01 01:05:02.205733 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:02.205736 | orchestrator | 2026-03-01 01:05:02.205739 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-01 01:05:02.205742 | orchestrator | 2026-03-01 01:05:02.205745 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-01 01:05:02.205749 | orchestrator | Sunday 01 March 2026 01:04:46 +0000 (0:00:11.503) 0:01:03.141 ********** 2026-03-01 01:05:02.205752 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:05:02.205755 | orchestrator | 2026-03-01 01:05:02.205760 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-01 01:05:02.205763 | orchestrator | 2026-03-01 01:05:02.205767 | orchestrator | TASK 
[Restart ceph manager service] ******************************************** 2026-03-01 01:05:02.205770 | orchestrator | Sunday 01 March 2026 01:04:49 +0000 (0:00:02.365) 0:01:05.506 ********** 2026-03-01 01:05:02.205773 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:05:02.205776 | orchestrator | 2026-03-01 01:05:02.205779 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:05:02.205782 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-01 01:05:02.205786 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:05:02.205789 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:05:02.205792 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:05:02.205795 | orchestrator | 2026-03-01 01:05:02.205799 | orchestrator | 2026-03-01 01:05:02.205802 | orchestrator | 2026-03-01 01:05:02.205805 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:05:02.205808 | orchestrator | Sunday 01 March 2026 01:04:59 +0000 (0:00:10.919) 0:01:16.425 ********** 2026-03-01 01:05:02.205811 | orchestrator | =============================================================================== 2026-03-01 01:05:02.205816 | orchestrator | Create admin user ------------------------------------------------------ 42.62s 2026-03-01 01:05:02.205822 | orchestrator | Restart ceph manager service ------------------------------------------- 24.79s 2026-03-01 01:05:02.205825 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.67s 2026-03-01 01:05:02.205828 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s 2026-03-01 01:05:02.205831 | orchestrator | 
Write ceph_dashboard_password to temporary file ------------------------- 1.09s 2026-03-01 01:05:02.205834 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.00s 2026-03-01 01:05:02.205837 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.99s 2026-03-01 01:05:02.205841 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s 2026-03-01 01:05:02.205844 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2026-03-01 01:05:02.205847 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s 2026-03-01 01:05:02.205850 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2026-03-01 01:05:02.205853 | orchestrator | 2026-03-01 01:05:02 | INFO  | Task 405abeee-f226-4d22-971c-718dafe112fd is in state SUCCESS 2026-03-01 01:05:02.205857 | orchestrator | 2026-03-01 01:05:02 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:02.205860 | orchestrator | 2026-03-01 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:05.227943 | orchestrator | 2026-03-01 01:05:05 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:05.228335 | orchestrator | 2026-03-01 01:05:05 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:05.228786 | orchestrator | 2026-03-01 01:05:05 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:05.229336 | orchestrator | 2026-03-01 01:05:05 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:05.229386 | orchestrator | 2026-03-01 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:08.258953 | orchestrator | 2026-03-01 01:05:08 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 
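The "Bootstraph ceph dashboard" play above (disable dashboard, set the `mgr/dashboard/*` options, re-enable, create the admin user from a temporary password file) maps onto standard `ceph` CLI operations. Under that assumption, the sequence can be sketched roughly as follows; the password file path and the `CEPH_DASHBOARD_PASSWORD` variable are hypothetical, and this is an illustration, not the play's actual module calls:

```shell
# Hedged sketch of the mgr/dashboard settings the play applies
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard

# The play writes the password to a temporary file, creates the admin
# user from it, then removes the file (path below is hypothetical).
printf '%s' "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The follow-up "Restart ceph manager service" plays are needed because several `mgr/dashboard/*` options only take effect once the active mgr daemon reloads the dashboard module.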
2026-03-01 01:05:08.259279 | orchestrator | 2026-03-01 01:05:08 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:08.259882 | orchestrator | 2026-03-01 01:05:08 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:08.260499 | orchestrator | 2026-03-01 01:05:08 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:08.260521 | orchestrator | 2026-03-01 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:11.298962 | orchestrator | 2026-03-01 01:05:11 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:11.299036 | orchestrator | 2026-03-01 01:05:11 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:11.299783 | orchestrator | 2026-03-01 01:05:11 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:11.300511 | orchestrator | 2026-03-01 01:05:11 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:11.300538 | orchestrator | 2026-03-01 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:14.329764 | orchestrator | 2026-03-01 01:05:14 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:14.330435 | orchestrator | 2026-03-01 01:05:14 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:14.330735 | orchestrator | 2026-03-01 01:05:14 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:14.331680 | orchestrator | 2026-03-01 01:05:14 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:14.331716 | orchestrator | 2026-03-01 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:17.360545 | orchestrator | 2026-03-01 01:05:17 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:17.361076 | 
orchestrator | 2026-03-01 01:05:17 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:17.361960 | orchestrator | 2026-03-01 01:05:17 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:17.363369 | orchestrator | 2026-03-01 01:05:17 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:17.363398 | orchestrator | 2026-03-01 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:20.395037 | orchestrator | 2026-03-01 01:05:20 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:20.396672 | orchestrator | 2026-03-01 01:05:20 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:20.398952 | orchestrator | 2026-03-01 01:05:20 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:20.399935 | orchestrator | 2026-03-01 01:05:20 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:20.399978 | orchestrator | 2026-03-01 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:23.437564 | orchestrator | 2026-03-01 01:05:23 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:23.437919 | orchestrator | 2026-03-01 01:05:23 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:23.439362 | orchestrator | 2026-03-01 01:05:23 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:23.440019 | orchestrator | 2026-03-01 01:05:23 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:23.440033 | orchestrator | 2026-03-01 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:26.470274 | orchestrator | 2026-03-01 01:05:26 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:26.470574 | orchestrator | 2026-03-01 
01:05:26 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:26.471159 | orchestrator | 2026-03-01 01:05:26 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:26.474221 | orchestrator | 2026-03-01 01:05:26 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:26.474264 | orchestrator | 2026-03-01 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:29.494497 | orchestrator | 2026-03-01 01:05:29 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:29.494844 | orchestrator | 2026-03-01 01:05:29 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:29.495607 | orchestrator | 2026-03-01 01:05:29 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:29.496226 | orchestrator | 2026-03-01 01:05:29 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:29.496242 | orchestrator | 2026-03-01 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:32.530191 | orchestrator | 2026-03-01 01:05:32 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:32.532526 | orchestrator | 2026-03-01 01:05:32 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:32.534847 | orchestrator | 2026-03-01 01:05:32 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:32.537522 | orchestrator | 2026-03-01 01:05:32 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:32.537575 | orchestrator | 2026-03-01 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:35.562836 | orchestrator | 2026-03-01 01:05:35 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:35.563213 | orchestrator | 2026-03-01 01:05:35 | INFO  | Task 
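The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines come from the manager polling task states until each one reaches `SUCCESS`. A minimal stand-alone sketch of that wait loop, with a stubbed state lookup (all names here are hypothetical, not the manager's real API):

```shell
#!/bin/sh
# Stub task-state lookup (hypothetical): reports STARTED twice, then SUCCESS.
STATE_FILE=$(mktemp)
echo 0 > "$STATE_FILE"
get_task_state() {
  n=$(cat "$STATE_FILE")
  n=$((n + 1))
  echo "$n" > "$STATE_FILE"
  if [ "$n" -ge 3 ]; then echo SUCCESS; else echo STARTED; fi
}

# Poll a task until it leaves STARTED, mirroring the log's wait loop.
wait_for_success() {
  task_id=$1
  max_checks=$2
  checks=0
  while [ "$(get_task_state "$task_id")" != "SUCCESS" ]; do
    checks=$((checks + 1))
    if [ "$checks" -ge "$max_checks" ]; then
      echo "Task $task_id did not finish"
      return 1
    fi
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
  echo "Task $task_id is in state SUCCESS"
}

wait_for_success 340b5554 10
```

The bounded `max_checks` is a defensive addition in this sketch; the real manager keeps polling for as long as the job allows.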
b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:35.566370 | orchestrator | 2026-03-01 01:05:35 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:35.566419 | orchestrator | 2026-03-01 01:05:35 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state STARTED 2026-03-01 01:05:35.566428 | orchestrator | 2026-03-01 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:38.594614 | orchestrator | 2026-03-01 01:05:38 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:38.595035 | orchestrator | 2026-03-01 01:05:38 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:38.595817 | orchestrator | 2026-03-01 01:05:38 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:38.597224 | orchestrator | 2026-03-01 01:05:38 | INFO  | Task 340b5554-1ccc-4b98-a5c9-8dcfb31761f2 is in state SUCCESS 2026-03-01 01:05:38.599869 | orchestrator | 2026-03-01 01:05:38.599896 | orchestrator | 2026-03-01 01:05:38.599903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:05:38.599909 | orchestrator | 2026-03-01 01:05:38.599915 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:05:38.599921 | orchestrator | Sunday 01 March 2026 01:03:40 +0000 (0:00:00.416) 0:00:00.416 ********** 2026-03-01 01:05:38.599927 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:05:38.599944 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:05:38.599950 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:05:38.599956 | orchestrator | 2026-03-01 01:05:38.600006 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:05:38.600028 | orchestrator | Sunday 01 March 2026 01:03:41 +0000 (0:00:00.514) 0:00:00.931 ********** 2026-03-01 01:05:38.600034 | 
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-01 01:05:38.600152 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-01 01:05:38.600158 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-01 01:05:38.600163 | orchestrator | 2026-03-01 01:05:38.600168 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-01 01:05:38.600172 | orchestrator | 2026-03-01 01:05:38.600177 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-01 01:05:38.600182 | orchestrator | Sunday 01 March 2026 01:03:41 +0000 (0:00:00.617) 0:00:01.548 ********** 2026-03-01 01:05:38.600188 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:05:38.600193 | orchestrator | 2026-03-01 01:05:38.600199 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-01 01:05:38.600204 | orchestrator | Sunday 01 March 2026 01:03:42 +0000 (0:00:00.456) 0:00:02.005 ********** 2026-03-01 01:05:38.600209 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-01 01:05:38.600214 | orchestrator | 2026-03-01 01:05:38.600218 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-01 01:05:38.600239 | orchestrator | Sunday 01 March 2026 01:03:46 +0000 (0:00:03.592) 0:00:05.598 ********** 2026-03-01 01:05:38.600244 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-01 01:05:38.600249 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-01 01:05:38.600254 | orchestrator | 2026-03-01 01:05:38.600259 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-01 
01:05:38.600263 | orchestrator | Sunday 01 March 2026 01:03:52 +0000 (0:00:06.223) 0:00:11.822 ********** 2026-03-01 01:05:38.600268 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-01 01:05:38.600274 | orchestrator | 2026-03-01 01:05:38.600279 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-01 01:05:38.600315 | orchestrator | Sunday 01 March 2026 01:03:55 +0000 (0:00:02.892) 0:00:14.714 ********** 2026-03-01 01:05:38.600321 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-01 01:05:38.600326 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 01:05:38.600332 | orchestrator | 2026-03-01 01:05:38.600337 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-01 01:05:38.600342 | orchestrator | Sunday 01 March 2026 01:03:59 +0000 (0:00:03.932) 0:00:18.646 ********** 2026-03-01 01:05:38.600348 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:05:38.600353 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-01 01:05:38.600358 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-01 01:05:38.600364 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-01 01:05:38.600369 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-01 01:05:38.600374 | orchestrator | 2026-03-01 01:05:38.600379 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-01 01:05:38.600385 | orchestrator | Sunday 01 March 2026 01:04:14 +0000 (0:00:15.527) 0:00:34.173 ********** 2026-03-01 01:05:38.600390 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-01 01:05:38.600395 | orchestrator | 2026-03-01 01:05:38.600401 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-01 
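The `service-ks-register` tasks above (services, endpoints, projects, users, roles, grants) correspond to ordinary Keystone registration steps. A hedged sketch with the `openstack` CLI, using the endpoint URLs visible in the log; the region name, description, and `BARBICAN_PASSWORD` variable are assumptions, not values taken from this run:

```shell
# Register the barbican key-manager service and its endpoints
openstack service create --name barbican key-manager
openstack endpoint create --region RegionOne \
    barbican internal https://api-int.testbed.osism.xyz:9311
openstack endpoint create --region RegionOne \
    barbican public https://api.testbed.osism.xyz:9311

# Service project, service user, and the roles the play creates
openstack project create --domain default service
openstack user create --project service --password "$BARBICAN_PASSWORD" barbican
openstack role create creator
openstack role create observer
openstack role create audit
openstack role add --project service --user barbican admin
```

The `[WARNING]: Module did not set no_log for update_password` line is a known Ansible collection warning for the user-creation module and does not indicate a failure here.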
01:05:38.600406 | orchestrator | Sunday 01 March 2026 01:04:19 +0000 (0:00:05.144) 0:00:39.318 ********** 2026-03-01 01:05:38.600413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600496 | orchestrator | 2026-03-01 01:05:38.600502 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-01 01:05:38.600507 | orchestrator | Sunday 01 March 2026 01:04:22 +0000 (0:00:02.650) 0:00:41.968 ********** 2026-03-01 01:05:38.600513 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-01 01:05:38.600518 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-01 01:05:38.600523 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-01 01:05:38.600529 | orchestrator | 2026-03-01 01:05:38.600534 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-01 01:05:38.600539 | orchestrator | Sunday 01 March 2026 01:04:23 +0000 (0:00:00.813) 0:00:42.782 ********** 2026-03-01 01:05:38.600545 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.600550 | orchestrator | 2026-03-01 01:05:38.600555 | orchestrator | TASK [barbican : Set barbican policy file] 
************************************* 2026-03-01 01:05:38.600561 | orchestrator | Sunday 01 March 2026 01:04:23 +0000 (0:00:00.131) 0:00:42.914 ********** 2026-03-01 01:05:38.600566 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.600571 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.600576 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.600581 | orchestrator | 2026-03-01 01:05:38.600587 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-01 01:05:38.600593 | orchestrator | Sunday 01 March 2026 01:04:23 +0000 (0:00:00.464) 0:00:43.378 ********** 2026-03-01 01:05:38.600598 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:05:38.600603 | orchestrator | 2026-03-01 01:05:38.600607 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-01 01:05:38.600613 | orchestrator | Sunday 01 March 2026 01:04:24 +0000 (0:00:00.734) 0:00:44.112 ********** 2026-03-01 01:05:38.600618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 
01:05:38.600646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600690 | orchestrator | 2026-03-01 01:05:38.600696 | orchestrator | TASK 
[service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-01 01:05:38.600701 | orchestrator | Sunday 01 March 2026 01:04:28 +0000 (0:00:03.676) 0:00:47.789 ********** 2026-03-01 01:05:38.600707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600724 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.600736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600755 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.600761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600780 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.600785 | orchestrator | 2026-03-01 01:05:38.600791 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-01 01:05:38.600796 | orchestrator | Sunday 01 March 2026 01:04:29 +0000 (0:00:01.544) 0:00:49.333 ********** 2026-03-01 01:05:38.600808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600826 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.600831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600855 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.600868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.600875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.600885 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.600891 | orchestrator | 2026-03-01 01:05:38.600896 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-01 01:05:38.600902 | orchestrator | Sunday 01 March 2026 01:04:31 +0000 (0:00:01.517) 0:00:50.851 ********** 2026-03-01 01:05:38.600908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.600939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value':
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600961 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.600987 | orchestrator | 2026-03-01 01:05:38.600993 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-01 01:05:38.600998 | orchestrator | Sunday 01 
March 2026 01:04:34 +0000 (0:00:03.254) 0:00:54.105 ********** 2026-03-01 01:05:38.601004 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:05:38.601009 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601014 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:05:38.601020 | orchestrator | 2026-03-01 01:05:38.601025 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-01 01:05:38.601031 | orchestrator | Sunday 01 March 2026 01:04:37 +0000 (0:00:03.026) 0:00:57.132 ********** 2026-03-01 01:05:38.601037 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:05:38.601043 | orchestrator | 2026-03-01 01:05:38.601049 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-01 01:05:38.601054 | orchestrator | Sunday 01 March 2026 01:04:38 +0000 (0:00:00.995) 0:00:58.127 ********** 2026-03-01 01:05:38.601060 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.601065 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.601071 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.601077 | orchestrator | 2026-03-01 01:05:38.601082 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-01 01:05:38.601088 | orchestrator | Sunday 01 March 2026 01:04:39 +0000 (0:00:01.165) 0:00:59.293 ********** 2026-03-01 01:05:38.601094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601210 | orchestrator | 2026-03-01 01:05:38.601220 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-01 01:05:38.601226 | orchestrator | Sunday 01 March 2026 01:04:49 +0000 (0:00:09.913) 0:01:09.207 ********** 2026-03-01 01:05:38.601235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.601241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601382 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.601389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.601395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601416 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.601421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-01 01:05:38.601430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:05:38.601441 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.601446 | orchestrator | 2026-03-01 01:05:38.601452 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-01 01:05:38.601457 | orchestrator | Sunday 01 March 2026 01:04:51 +0000 (0:00:01.519) 0:01:10.726 ********** 2026-03-01 01:05:38.601462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-01 01:05:38.601488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:05:38.601536 | orchestrator | 2026-03-01 01:05:38.601541 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-01 01:05:38.601546 | orchestrator | Sunday 01 March 2026 01:04:54 +0000 (0:00:03.119) 0:01:13.846 ********** 2026-03-01 01:05:38.601551 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:05:38.601557 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:05:38.601562 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:05:38.601567 | orchestrator | 2026-03-01 01:05:38.601573 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-01 01:05:38.601578 | orchestrator | Sunday 01 March 2026 01:04:54 +0000 (0:00:00.358) 0:01:14.204 ********** 2026-03-01 01:05:38.601583 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601589 | orchestrator | 2026-03-01 01:05:38.601594 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-01 01:05:38.601599 | orchestrator | Sunday 01 March 2026 01:04:57 +0000 (0:00:02.684) 0:01:16.888 ********** 2026-03-01 01:05:38.601604 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601609 | orchestrator | 2026-03-01 01:05:38.601615 | orchestrator | TASK [barbican : 
Running barbican bootstrap container] ************************* 2026-03-01 01:05:38.601620 | orchestrator | Sunday 01 March 2026 01:04:59 +0000 (0:00:02.493) 0:01:19.382 ********** 2026-03-01 01:05:38.601626 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601631 | orchestrator | 2026-03-01 01:05:38.601636 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-01 01:05:38.601641 | orchestrator | Sunday 01 March 2026 01:05:10 +0000 (0:00:10.492) 0:01:29.875 ********** 2026-03-01 01:05:38.601646 | orchestrator | 2026-03-01 01:05:38.601652 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-01 01:05:38.601657 | orchestrator | Sunday 01 March 2026 01:05:10 +0000 (0:00:00.116) 0:01:29.991 ********** 2026-03-01 01:05:38.601662 | orchestrator | 2026-03-01 01:05:38.601667 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-01 01:05:38.601673 | orchestrator | Sunday 01 March 2026 01:05:10 +0000 (0:00:00.063) 0:01:30.054 ********** 2026-03-01 01:05:38.601678 | orchestrator | 2026-03-01 01:05:38.601683 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-01 01:05:38.601688 | orchestrator | Sunday 01 March 2026 01:05:10 +0000 (0:00:00.058) 0:01:30.113 ********** 2026-03-01 01:05:38.601694 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601699 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:05:38.601704 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:05:38.601709 | orchestrator | 2026-03-01 01:05:38.601714 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-01 01:05:38.601719 | orchestrator | Sunday 01 March 2026 01:05:16 +0000 (0:00:06.357) 0:01:36.471 ********** 2026-03-01 01:05:38.601725 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601730 | 
orchestrator | changed: [testbed-node-2] 2026-03-01 01:05:38.601735 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:05:38.601740 | orchestrator | 2026-03-01 01:05:38.601745 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-01 01:05:38.601750 | orchestrator | Sunday 01 March 2026 01:05:27 +0000 (0:00:10.449) 0:01:46.921 ********** 2026-03-01 01:05:38.601755 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:05:38.601761 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:05:38.601766 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:05:38.601771 | orchestrator | 2026-03-01 01:05:38.601776 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:05:38.601782 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:05:38.601792 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-01 01:05:38.601797 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-01 01:05:38.601802 | orchestrator | 2026-03-01 01:05:38.601808 | orchestrator | 2026-03-01 01:05:38.601813 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:05:38.601818 | orchestrator | Sunday 01 March 2026 01:05:37 +0000 (0:00:10.314) 0:01:57.236 ********** 2026-03-01 01:05:38.601824 | orchestrator | =============================================================================== 2026-03-01 01:05:38.601832 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.53s 2026-03-01 01:05:38.601837 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.49s 2026-03-01 01:05:38.601843 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 
10.45s 2026-03-01 01:05:38.601850 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.32s 2026-03-01 01:05:38.601855 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.91s 2026-03-01 01:05:38.601861 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.36s 2026-03-01 01:05:38.601866 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.22s 2026-03-01 01:05:38.601871 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.14s 2026-03-01 01:05:38.601877 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s 2026-03-01 01:05:38.601882 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.68s 2026-03-01 01:05:38.601887 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.59s 2026-03-01 01:05:38.601892 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.25s 2026-03-01 01:05:38.601898 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.12s 2026-03-01 01:05:38.601903 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.03s 2026-03-01 01:05:38.601908 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.89s 2026-03-01 01:05:38.601913 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.68s 2026-03-01 01:05:38.601918 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.65s 2026-03-01 01:05:38.601924 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.49s 2026-03-01 01:05:38.601929 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.54s 
2026-03-01 01:05:38.601935 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.52s 2026-03-01 01:05:41.625850 | orchestrator | 2026-03-01 01:05:41 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:41.625994 | orchestrator | 2026-03-01 01:05:41 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:41.626655 | orchestrator | 2026-03-01 01:05:41 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:41.627181 | orchestrator | 2026-03-01 01:05:41 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:41.627211 | orchestrator | 2026-03-01 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:44.652349 | orchestrator | 2026-03-01 01:05:44 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:44.653130 | orchestrator | 2026-03-01 01:05:44 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:44.653525 | orchestrator | 2026-03-01 01:05:44 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:44.654853 | orchestrator | 2026-03-01 01:05:44 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:44.654888 | orchestrator | 2026-03-01 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:47.684916 | orchestrator | 2026-03-01 01:05:47 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:47.685094 | orchestrator | 2026-03-01 01:05:47 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:47.686889 | orchestrator | 2026-03-01 01:05:47 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:47.688224 | orchestrator | 2026-03-01 01:05:47 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 
01:05:47.688331 | orchestrator | 2026-03-01 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:50.713191 | orchestrator | 2026-03-01 01:05:50 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:50.714765 | orchestrator | 2026-03-01 01:05:50 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:50.715920 | orchestrator | 2026-03-01 01:05:50 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:50.716817 | orchestrator | 2026-03-01 01:05:50 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:50.717022 | orchestrator | 2026-03-01 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:53.747464 | orchestrator | 2026-03-01 01:05:53 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:53.748558 | orchestrator | 2026-03-01 01:05:53 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:53.749893 | orchestrator | 2026-03-01 01:05:53 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:53.751331 | orchestrator | 2026-03-01 01:05:53 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:53.751426 | orchestrator | 2026-03-01 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:56.785531 | orchestrator | 2026-03-01 01:05:56 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:56.786471 | orchestrator | 2026-03-01 01:05:56 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:56.788663 | orchestrator | 2026-03-01 01:05:56 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:56.790871 | orchestrator | 2026-03-01 01:05:56 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:56.790917 | orchestrator 
| 2026-03-01 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:05:59.830042 | orchestrator | 2026-03-01 01:05:59 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:05:59.832632 | orchestrator | 2026-03-01 01:05:59 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:05:59.833371 | orchestrator | 2026-03-01 01:05:59 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:05:59.834470 | orchestrator | 2026-03-01 01:05:59 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:05:59.834502 | orchestrator | 2026-03-01 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:06:02.872636 | orchestrator | 2026-03-01 01:06:02 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:02.874632 | orchestrator | 2026-03-01 01:06:02 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:02.876700 | orchestrator | 2026-03-01 01:06:02 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:02.877763 | orchestrator | 2026-03-01 01:06:02 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:02.877861 | orchestrator | 2026-03-01 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:06:05.919212 | orchestrator | 2026-03-01 01:06:05 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:05.921054 | orchestrator | 2026-03-01 01:06:05 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:05.923152 | orchestrator | 2026-03-01 01:06:05 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:05.924362 | orchestrator | 2026-03-01 01:06:05 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:05.924395 | orchestrator | 2026-03-01 01:06:05 | INFO  | 
Wait 1 second(s) until the next check 2026-03-01 01:06:08.966732 | orchestrator | 2026-03-01 01:06:08 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:08.969998 | orchestrator | 2026-03-01 01:06:08 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:08.971582 | orchestrator | 2026-03-01 01:06:08 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:08.973686 | orchestrator | 2026-03-01 01:06:08 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:08.973736 | orchestrator | 2026-03-01 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:06:12.020792 | orchestrator | 2026-03-01 01:06:12 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:12.022343 | orchestrator | 2026-03-01 01:06:12 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:12.024376 | orchestrator | 2026-03-01 01:06:12 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:12.025380 | orchestrator | 2026-03-01 01:06:12 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:12.025410 | orchestrator | 2026-03-01 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:06:15.057846 | orchestrator | 2026-03-01 01:06:15 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:15.060212 | orchestrator | 2026-03-01 01:06:15 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:15.062759 | orchestrator | 2026-03-01 01:06:15 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:15.065085 | orchestrator | 2026-03-01 01:06:15 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:15.065144 | orchestrator | 2026-03-01 01:06:15 | INFO  | Wait 1 second(s) until the next 
check 2026-03-01 01:06:18.102636 | orchestrator | 2026-03-01 01:06:18 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:18.104536 | orchestrator | 2026-03-01 01:06:18 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state STARTED 2026-03-01 01:06:18.107803 | orchestrator | 2026-03-01 01:06:18 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED 2026-03-01 01:06:18.110204 | orchestrator | 2026-03-01 01:06:18 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:06:18.110279 | orchestrator | 2026-03-01 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:06:21.174288 | orchestrator | 2026-03-01 01:06:21 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED 2026-03-01 01:06:21.181091 | orchestrator | 2026-03-01 01:06:21 | INFO  | Task c8c97b57-cd30-420e-9e11-80eb573c09ea is in state SUCCESS 2026-03-01 01:06:21.182575 | orchestrator | 2026-03-01 01:06:21.182624 | orchestrator | 2026-03-01 01:06:21.182636 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:06:21.182644 | orchestrator | 2026-03-01 01:06:21.182651 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:06:21.182659 | orchestrator | Sunday 01 March 2026 01:03:40 +0000 (0:00:00.242) 0:00:00.242 ********** 2026-03-01 01:06:21.182666 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:06:21.182674 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:06:21.182681 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:06:21.182688 | orchestrator | 2026-03-01 01:06:21.182694 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:06:21.182701 | orchestrator | Sunday 01 March 2026 01:03:40 +0000 (0:00:00.266) 0:00:00.509 ********** 2026-03-01 01:06:21.182709 | orchestrator | ok: [testbed-node-0] => 
(item=enable_designate_True) 2026-03-01 01:06:21.182716 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-01 01:06:21.182722 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-01 01:06:21.182729 | orchestrator | 2026-03-01 01:06:21.182737 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-01 01:06:21.182744 | orchestrator | 2026-03-01 01:06:21.182751 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-01 01:06:21.182758 | orchestrator | Sunday 01 March 2026 01:03:41 +0000 (0:00:00.441) 0:00:00.950 ********** 2026-03-01 01:06:21.182765 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:06:21.182774 | orchestrator | 2026-03-01 01:06:21.182781 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-01 01:06:21.182787 | orchestrator | Sunday 01 March 2026 01:03:42 +0000 (0:00:00.682) 0:00:01.633 ********** 2026-03-01 01:06:21.182792 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-01 01:06:21.182796 | orchestrator | 2026-03-01 01:06:21.182800 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-01 01:06:21.182804 | orchestrator | Sunday 01 March 2026 01:03:45 +0000 (0:00:03.388) 0:00:05.022 ********** 2026-03-01 01:06:21.182808 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-01 01:06:21.182813 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-01 01:06:21.182817 | orchestrator | 2026-03-01 01:06:21.182879 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-01 01:06:21.182885 | orchestrator | Sunday 01 
March 2026 01:03:51 +0000 (0:00:06.277) 0:00:11.299 ********** 2026-03-01 01:06:21.182889 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-01 01:06:21.182902 | orchestrator | 2026-03-01 01:06:21.182907 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-01 01:06:21.182911 | orchestrator | Sunday 01 March 2026 01:03:54 +0000 (0:00:02.945) 0:00:14.245 ********** 2026-03-01 01:06:21.182921 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-01 01:06:21.182925 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 01:06:21.182929 | orchestrator | 2026-03-01 01:06:21.182933 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-01 01:06:21.182937 | orchestrator | Sunday 01 March 2026 01:03:58 +0000 (0:00:03.719) 0:00:17.965 ********** 2026-03-01 01:06:21.182941 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:06:21.182967 | orchestrator | 2026-03-01 01:06:21.182971 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-01 01:06:21.182975 | orchestrator | Sunday 01 March 2026 01:04:01 +0000 (0:00:03.116) 0:00:21.081 ********** 2026-03-01 01:06:21.182979 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-01 01:06:21.182983 | orchestrator | 2026-03-01 01:06:21.182987 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-01 01:06:21.182992 | orchestrator | Sunday 01 March 2026 01:04:05 +0000 (0:00:03.550) 0:00:24.631 ********** 2026-03-01 01:06:21.183008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183506 | orchestrator | 2026-03-01 01:06:21.183511 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-01 01:06:21.183516 | orchestrator | Sunday 01 March 2026 01:04:07 +0000 (0:00:02.826) 0:00:27.458 ********** 2026-03-01 01:06:21.183520 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.183525 | orchestrator | 2026-03-01 01:06:21.183529 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-01 01:06:21.183533 | orchestrator | Sunday 01 March 2026 01:04:08 +0000 (0:00:00.123) 0:00:27.581 ********** 2026-03-01 01:06:21.183537 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.183541 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.183545 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:06:21.183549 | orchestrator | 2026-03-01 01:06:21.183553 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-01 01:06:21.183557 | orchestrator | Sunday 01 March 2026 01:04:08 +0000 (0:00:00.265) 0:00:27.847 ********** 2026-03-01 01:06:21.183561 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:06:21.183565 | orchestrator | 2026-03-01 01:06:21.183569 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-01 01:06:21.183573 | orchestrator | Sunday 01 March 2026 01:04:08 +0000 (0:00:00.629) 0:00:28.476 ********** 2026-03-01 01:06:21.183582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.183605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.183717 | orchestrator | 2026-03-01 01:06:21.183722 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-01 01:06:21.183726 | orchestrator | Sunday 01 March 2026 01:04:14 +0000 (0:00:05.566) 0:00:34.042 ********** 2026-03-01 01:06:21.183732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.183737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.183744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183768 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.183799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.183804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-03-01 01:06:21.183812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183833 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:06:21.183837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.183844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.183861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.183905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184174 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.184178 | orchestrator | 2026-03-01 01:06:21.184182 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-01 01:06:21.184186 | orchestrator | Sunday 01 March 2026 01:04:15 +0000 (0:00:00.716) 0:00:34.759 ********** 2026-03-01 01:06:21.184191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184198 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.184233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184257 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.184261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.184286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184364 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.184371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.184390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184444 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:06:21.184450 | orchestrator | 2026-03-01 01:06:21.184456 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-01 01:06:21.184463 | orchestrator | Sunday 01 March 2026 01:04:16 +0000 (0:00:01.118) 0:00:35.878 ********** 2026-03-01 01:06:21.184469 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-01 01:06:21.184523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184556 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184652 | orchestrator | 2026-03-01 01:06:21.184657 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-01 01:06:21.184660 | orchestrator | Sunday 01 March 2026 01:04:23 +0000 (0:00:06.924) 0:00:42.803 ********** 2026-03-01 01:06:21.184665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.184696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184701 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184717 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184813 | orchestrator | 2026-03-01 01:06:21.184820 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-01 01:06:21.184825 | orchestrator | Sunday 01 March 2026 01:04:42 +0000 (0:00:19.500) 0:01:02.303 ********** 2026-03-01 01:06:21.184832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-01 01:06:21.184839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-01 01:06:21.184846 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-01 01:06:21.184852 | orchestrator | 2026-03-01 01:06:21.184858 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-01 01:06:21.184864 | orchestrator | Sunday 01 March 2026 01:04:48 +0000 (0:00:05.906) 0:01:08.211 ********** 2026-03-01 01:06:21.184870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-01 01:06:21.184877 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-01 01:06:21.184884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-01 01:06:21.184891 | orchestrator | 2026-03-01 01:06:21.184898 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-01 01:06:21.184910 | orchestrator | Sunday 01 March 2026 01:04:52 +0000 (0:00:03.640) 0:01:11.851 ********** 2026-03-01 01:06:21.184918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.184943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.184995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.184999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185033 | orchestrator | 2026-03-01 01:06:21.185038 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-01 01:06:21.185042 | orchestrator | Sunday 01 March 2026 01:04:55 +0000 (0:00:03.572) 0:01:15.423 ********** 2026-03-01 01:06:21.185048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 
01:06:21.185055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.185067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.185079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185207 | orchestrator | 2026-03-01 01:06:21.185239 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-01 01:06:21.185246 | orchestrator | Sunday 01 March 2026 01:04:58 +0000 (0:00:02.962) 0:01:18.387 ********** 2026-03-01 01:06:21.185253 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.185260 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.185267 | orchestrator | skipping: [testbed-node-2] 
2026-03-01 01:06:21.185273 | orchestrator | 2026-03-01 01:06:21.185280 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-01 01:06:21.185287 | orchestrator | Sunday 01 March 2026 01:04:59 +0000 (0:00:00.564) 0:01:18.952 ********** 2026-03-01 01:06:21.185295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.185303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.185311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185359 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:06:21.185367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.185372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.185377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185402 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185407 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.185414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-01 01:06:21.185420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-01 01:06:21.185427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:06:21.185476 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.185482 | orchestrator | 2026-03-01 01:06:21.185488 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-01 01:06:21.185494 | orchestrator | Sunday 01 March 2026 01:05:00 +0000 (0:00:01.566) 0:01:20.518 ********** 2026-03-01 01:06:21.185500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-03-01 01:06:21.185506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.185516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-01 01:06:21.185522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:06:21.185725 | orchestrator | 2026-03-01 01:06:21.185730 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-01 01:06:21.185735 | orchestrator | Sunday 01 March 2026 01:05:05 +0000 (0:00:04.386) 0:01:24.905 ********** 2026-03-01 01:06:21.185740 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:06:21.185745 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:06:21.185750 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:06:21.185754 | orchestrator | 2026-03-01 01:06:21.185758 | 
orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-01 01:06:21.185763 | orchestrator | Sunday 01 March 2026 01:05:05 +0000 (0:00:00.536) 0:01:25.441 ********** 2026-03-01 01:06:21.185768 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-01 01:06:21.185772 | orchestrator | 2026-03-01 01:06:21.185777 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-01 01:06:21.185782 | orchestrator | Sunday 01 March 2026 01:05:07 +0000 (0:00:01.976) 0:01:27.417 ********** 2026-03-01 01:06:21.185787 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-01 01:06:21.185792 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-01 01:06:21.185796 | orchestrator | 2026-03-01 01:06:21.185800 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-01 01:06:21.185805 | orchestrator | Sunday 01 March 2026 01:05:09 +0000 (0:00:02.089) 0:01:29.507 ********** 2026-03-01 01:06:21.185810 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:06:21.185815 | orchestrator | 2026-03-01 01:06:21.185819 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-01 01:06:21.185824 | orchestrator | Sunday 01 March 2026 01:05:24 +0000 (0:00:14.213) 0:01:43.721 ********** 2026-03-01 01:06:21.185828 | orchestrator | 2026-03-01 01:06:21.185833 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-01 01:06:21.185838 | orchestrator | Sunday 01 March 2026 01:05:24 +0000 (0:00:00.067) 0:01:43.788 ********** 2026-03-01 01:06:21.185843 | orchestrator | 2026-03-01 01:06:21.185847 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-01 01:06:21.185852 | orchestrator | Sunday 01 March 2026 01:05:24 +0000 (0:00:00.060) 0:01:43.849 ********** 
2026-03-01 01:06:21.185857 | orchestrator |
2026-03-01 01:06:21.185861 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-01 01:06:21.185866 | orchestrator | Sunday 01 March 2026 01:05:24 +0000 (0:00:00.063) 0:01:43.913 **********
2026-03-01 01:06:21.185870 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.185875 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.185880 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.185885 | orchestrator |
2026-03-01 01:06:21.185889 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-01 01:06:21.185894 | orchestrator | Sunday 01 March 2026 01:05:37 +0000 (0:00:13.107) 0:01:57.021 **********
2026-03-01 01:06:21.185898 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.185906 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.185911 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.185915 | orchestrator |
2026-03-01 01:06:21.185920 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-01 01:06:21.185925 | orchestrator | Sunday 01 March 2026 01:05:44 +0000 (0:00:07.217) 0:02:04.238 **********
2026-03-01 01:06:21.185929 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.185934 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.185939 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.185943 | orchestrator |
2026-03-01 01:06:21.185948 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-01 01:06:21.185952 | orchestrator | Sunday 01 March 2026 01:05:51 +0000 (0:00:06.583) 0:02:10.821 **********
2026-03-01 01:06:21.185957 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.185961 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.185966 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.185970 | orchestrator |
2026-03-01 01:06:21.185975 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-01 01:06:21.185980 | orchestrator | Sunday 01 March 2026 01:05:59 +0000 (0:00:07.904) 0:02:18.725 **********
2026-03-01 01:06:21.185985 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.185989 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.185994 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.185998 | orchestrator |
2026-03-01 01:06:21.186003 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-01 01:06:21.186008 | orchestrator | Sunday 01 March 2026 01:06:07 +0000 (0:00:08.159) 0:02:26.885 **********
2026-03-01 01:06:21.186071 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.186078 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:06:21.186083 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:06:21.186088 | orchestrator |
2026-03-01 01:06:21.186095 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-01 01:06:21.186100 | orchestrator | Sunday 01 March 2026 01:06:12 +0000 (0:00:05.086) 0:02:31.971 **********
2026-03-01 01:06:21.186105 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:06:21.186110 | orchestrator |
2026-03-01 01:06:21.186114 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:06:21.186119 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-01 01:06:21.186126 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-01 01:06:21.186131 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-01 01:06:21.186136 | orchestrator |
2026-03-01 01:06:21.186141 | orchestrator |
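The PLAY RECAP above is Ansible's standard per-host summary line. When post-processing job logs like this one, such lines are easy to turn into structured data; the following is a minimal sketch (the helper name `parse_play_recap` and its return shape are assumptions for illustration, not part of any OSISM or Zuul tooling):

```python
import re

def parse_play_recap(line: str) -> tuple[str, dict[str, int]]:
    """Parse one Ansible PLAY RECAP row of the form
    'host : ok=29  changed=24  unreachable=0 failed=0 ...'
    into (hostname, {counter: value})."""
    host, _, counters = line.partition(":")
    stats = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_play_recap(
    "testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
```

A caller could then flag any host whose `failed` or `unreachable` counter is non-zero instead of eyeballing the recap.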
2026-03-01 01:06:21.186150 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:06:21.186154 | orchestrator | Sunday 01 March 2026 01:06:19 +0000 (0:00:07.190) 0:02:39.161 **********
2026-03-01 01:06:21.186159 | orchestrator | ===============================================================================
2026-03-01 01:06:21.186164 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.50s
2026-03-01 01:06:21.186168 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.21s
2026-03-01 01:06:21.186173 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.11s
2026-03-01 01:06:21.186177 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.16s
2026-03-01 01:06:21.186182 | orchestrator | designate : Restart designate-producer container ------------------------ 7.90s
2026-03-01 01:06:21.186186 | orchestrator | designate : Restart designate-api container ----------------------------- 7.22s
2026-03-01 01:06:21.186191 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.19s
2026-03-01 01:06:21.186200 | orchestrator | designate : Copying over config.json files for services ----------------- 6.93s
2026-03-01 01:06:21.186205 | orchestrator | designate : Restart designate-central container ------------------------- 6.58s
2026-03-01 01:06:21.186328 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.28s
2026-03-01 01:06:21.186351 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.91s
2026-03-01 01:06:21.186359 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.57s
2026-03-01 01:06:21.186366 | orchestrator | designate : Restart designate-worker container -------------------------- 5.09s
2026-03-01 01:06:21.186372 | orchestrator | designate : Check designate containers ---------------------------------- 4.39s
2026-03-01 01:06:21.186376 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.72s
2026-03-01 01:06:21.186380 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.64s
2026-03-01 01:06:21.186385 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.57s
2026-03-01 01:06:21.186390 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.55s
2026-03-01 01:06:21.186395 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.39s
2026-03-01 01:06:21.186399 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.12s
2026-03-01 01:06:21.186404 | orchestrator | 2026-03-01 01:06:21 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:06:21.186760 | orchestrator | 2026-03-01 01:06:21 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state STARTED
2026-03-01 01:06:21.188271 | orchestrator | 2026-03-01 01:06:21 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:06:21.188315 | orchestrator | 2026-03-01 01:06:21 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:06:24.220766 | orchestrator | 2026-03-01 01:06:24 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED
2026-03-01 01:06:24.224418 | orchestrator | 2026-03-01 01:06:24 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:06:24.225891 | orchestrator | 2026-03-01 01:06:24 | INFO  | Task b417e241-7bd4-4354-8355-794700b86f0b is in state SUCCESS
2026-03-01 01:06:24.228303 | orchestrator | 2026-03-01 01:06:24 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:06:24.230498 | orchestrator | 2026-03-01 01:06:24 | INFO  | Task 8c979126-5bdc-411e-b903-5a2c12e9669e is in state STARTED
2026-03-01 01:06:24.230824 | orchestrator | 2026-03-01 01:06:24 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:06:27.269661 | orchestrator | 2026-03-01 01:06:27 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED
2026-03-01 01:06:27.270649 | orchestrator | 2026-03-01 01:06:27 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:06:27.271869 | orchestrator | 2026-03-01 01:06:27 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:06:27.272462 | orchestrator | 2026-03-01 01:06:27 | INFO  | Task 8c979126-5bdc-411e-b903-5a2c12e9669e is in state STARTED
2026-03-01 01:06:27.272479 | orchestrator | 2026-03-01 01:06:27 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:06:30.318827 | orchestrator | 2026-03-01 01:06:30 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:06:30.320753 | orchestrator | 2026-03-01 01:06:30 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED
2026-03-01 01:06:30.323385 | orchestrator | 2026-03-01 01:06:30 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:06:30.327969 | orchestrator | 2026-03-01 01:06:30 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:06:30.329260 | orchestrator | 2026-03-01 01:06:30 | INFO  | Task 8c979126-5bdc-411e-b903-5a2c12e9669e is in state SUCCESS
2026-03-01 01:06:30.329298 | orchestrator | 2026-03-01 01:06:30 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:06:33.369981 | orchestrator | 2026-03-01 01:06:33 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:06:33.370083 | orchestrator | 2026-03-01 01:06:33 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED
2026-03-01 01:06:33.370758 | orchestrator | 2026-03-01 01:06:33 | INFO  | Task
bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:28.101088 | orchestrator | 2026-03-01 01:07:28 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:28.101800 | orchestrator | 2026-03-01 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:31.136633 | orchestrator | 2026-03-01 01:07:31 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:31.136804 | orchestrator | 2026-03-01 01:07:31 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state STARTED
2026-03-01 01:07:31.137780 | orchestrator | 2026-03-01 01:07:31 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:31.138589 | orchestrator | 2026-03-01 01:07:31 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:31.138615 | orchestrator | 2026-03-01 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:34.193181 | orchestrator | 2026-03-01 01:07:34 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:34.194422 | orchestrator | 2026-03-01 01:07:34 | INFO  | Task e75d0248-3ab5-42f4-b1bb-bc3276dd68f7 is in state SUCCESS
2026-03-01 01:07:34.195791 | orchestrator |
2026-03-01 01:07:34.195853 | orchestrator |
2026-03-01 01:07:34.195862 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-01 01:07:34.195868 | orchestrator |
2026-03-01 01:07:34.195873 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-01 01:07:34.195879 | orchestrator | Sunday 01 March 2026 01:03:41 +0000 (0:00:00.108) 0:00:00.108 **********
2026-03-01 01:07:34.195885 | orchestrator | changed: [localhost]
2026-03-01 01:07:34.195891 | orchestrator |
2026-03-01 01:07:34.195897 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-01 01:07:34.195903 | orchestrator | Sunday 01 March 2026 01:03:42 +0000 (0:00:01.232) 0:00:01.341 **********
2026-03-01 01:07:34.195909 | orchestrator | changed: [localhost]
2026-03-01 01:07:34.195915 | orchestrator |
2026-03-01 01:07:34.195919 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-01 01:07:34.195922 | orchestrator | Sunday 01 March 2026 01:05:04 +0000 (0:01:21.826) 0:01:23.167 **********
2026-03-01 01:07:34.195925 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-01 01:07:34.195929 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
2026-03-01 01:07:34.195944 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left).
2026-03-01 01:07:34.195948 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.kernel", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.kernel"}
2026-03-01 01:07:34.195952 | orchestrator |
2026-03-01 01:07:34.195956 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:07:34.195959 | orchestrator | localhost : ok=2  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-03-01 01:07:34.195963 | orchestrator |
2026-03-01 01:07:34.195966 | orchestrator |
2026-03-01 01:07:34.195969 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:07:34.195972 | orchestrator | Sunday 01 March 2026 01:06:21 +0000 (0:01:17.520) 0:02:40.687 **********
2026-03-01 01:07:34.195975 | orchestrator | ===============================================================================
2026-03-01 01:07:34.195978 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 81.83s
2026-03-01 01:07:34.195982 | orchestrator | Download ironic-agent kernel ------------------------------------------- 77.52s
2026-03-01 01:07:34.195985 | orchestrator | Ensure the destination directory exists --------------------------------- 1.23s
2026-03-01 01:07:34.195988 | orchestrator |
2026-03-01 01:07:34.195991 | orchestrator |
2026-03-01 01:07:34.195994 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:07:34.195997 | orchestrator |
2026-03-01 01:07:34.196000 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:07:34.196003 | orchestrator | Sunday 01 March 2026 01:06:26 +0000 (0:00:00.172) 0:00:00.172 **********
2026-03-01 01:07:34.196006 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:34.196009 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:34.196012 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:34.196015 | orchestrator |
2026-03-01 01:07:34.196018 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 01:07:34.196022 | orchestrator | Sunday 01 March 2026 01:06:26 +0000 (0:00:00.316) 0:00:00.489 **********
2026-03-01 01:07:34.196027 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-01 01:07:34.196033 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-01 01:07:34.196039 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-01 01:07:34.196043 | orchestrator |
2026-03-01 01:07:34.196049 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-01 01:07:34.196053 | orchestrator |
2026-03-01 01:07:34.196056 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-01 01:07:34.196059 | orchestrator | Sunday 01 March 2026 01:06:27 +0000 (0:00:00.628) 0:00:01.117 **********
2026-03-01 01:07:34.196062 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:34.196065 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:34.196068 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:34.196071 | orchestrator |
2026-03-01 01:07:34.196074 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:07:34.196078 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:07:34.196081 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:07:34.196084 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:07:34.196087 | orchestrator |
2026-03-01 01:07:34.196145 | orchestrator |
2026-03-01 01:07:34.196148 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:07:34.196151 | orchestrator | Sunday 01 March 2026 01:06:27 +0000 (0:00:00.654) 0:00:01.772 **********
2026-03-01 01:07:34.196164 | orchestrator | ===============================================================================
2026-03-01 01:07:34.196167 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s
2026-03-01 01:07:34.196170 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-03-01 01:07:34.196173 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-03-01 01:07:34.196177 | orchestrator |
2026-03-01 01:07:34.196182 | orchestrator |
2026-03-01 01:07:34.196187 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:07:34.196193 | orchestrator |
2026-03-01 01:07:34.196210 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:07:34.196216 | orchestrator | Sunday 01 March 2026 01:05:44 +0000 (0:00:00.408) 0:00:00.408 **********
2026-03-01 01:07:34.196221 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:34.196227 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:34.196233 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:34.196239 | orchestrator |
2026-03-01 01:07:34.196244 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 01:07:34.196249 | orchestrator | Sunday 01 March 2026 01:05:45 +0000 (0:00:00.460) 0:00:00.871 **********
2026-03-01 01:07:34.196252 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-01 01:07:34.196255 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-01 01:07:34.196259 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-01 01:07:34.196262 | orchestrator |
2026-03-01 01:07:34.196265 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-01 01:07:34.196268 | orchestrator |
2026-03-01 01:07:34.196271 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-01 01:07:34.196280 | orchestrator | Sunday 01 March 2026 01:05:45 +0000 (0:00:00.511) 0:00:01.383 **********
2026-03-01 01:07:34.196284 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:07:34.196287 | orchestrator |
2026-03-01 01:07:34.196290 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-01 01:07:34.196297 | orchestrator | Sunday 01 March 2026 01:05:46 +0000 (0:00:01.077) 0:00:02.461 **********
2026-03-01 01:07:34.196300 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-01 01:07:34.196303 | orchestrator |
2026-03-01 01:07:34.196306 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-01 01:07:34.196309 | orchestrator | Sunday 01 March 2026 01:05:51 +0000 (0:00:04.291) 0:00:06.752 **********
2026-03-01 01:07:34.196312 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-01 01:07:34.196316 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-01 01:07:34.196319 | orchestrator |
2026-03-01 01:07:34.196322 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-01 01:07:34.196325 | orchestrator | Sunday 01 March 2026 01:05:57 +0000 (0:00:06.532) 0:00:13.284 **********
2026-03-01 01:07:34.196328 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-01 01:07:34.196331 | orchestrator |
2026-03-01 01:07:34.196334 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-01 01:07:34.196337 | orchestrator | Sunday 01 March 2026 01:06:00 +0000 (0:00:03.028) 0:00:16.313 **********
2026-03-01 01:07:34.196341 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-01 01:07:34.196344 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-01 01:07:34.196347 | orchestrator |
2026-03-01 01:07:34.196350 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-01 01:07:34.196353 | orchestrator | Sunday 01 March 2026 01:06:03 +0000 (0:00:03.290) 0:00:19.603 **********
2026-03-01 01:07:34.196356 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-01 01:07:34.196363 | orchestrator |
2026-03-01 01:07:34.196366 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-01 01:07:34.196369 | orchestrator | Sunday 01 March 2026 01:06:07 +0000 (0:00:03.112) 0:00:22.716 **********
2026-03-01 01:07:34.196372 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-01 01:07:34.196375 | orchestrator |
2026-03-01 01:07:34.196378 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-01 01:07:34.196381 | orchestrator | Sunday 01 March 2026 01:06:10 +0000 (0:00:03.663) 0:00:26.380 **********
2026-03-01 01:07:34.196384 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196387 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:34.196391 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:34.196394 | orchestrator |
2026-03-01 01:07:34.196397 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-01 01:07:34.196400 | orchestrator | Sunday 01 March 2026 01:06:10 +0000 (0:00:00.268) 0:00:26.648 **********
2026-03-01 01:07:34.196408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196423 | orchestrator |
2026-03-01 01:07:34.196426 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-01 01:07:34.196430 | orchestrator | Sunday 01 March 2026 01:06:11 +0000 (0:00:00.753) 0:00:27.402 **********
2026-03-01 01:07:34.196435 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196438 | orchestrator |
2026-03-01 01:07:34.196441 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-01 01:07:34.196445 | orchestrator | Sunday 01 March 2026 01:06:11 +0000 (0:00:00.135) 0:00:27.537 **********
2026-03-01 01:07:34.196448 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196451 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:34.196454 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:34.196457 | orchestrator |
2026-03-01 01:07:34.196461 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-01 01:07:34.196466 | orchestrator | Sunday 01 March 2026 01:06:12 +0000 (0:00:00.371) 0:00:27.909 **********
2026-03-01 01:07:34.196471 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:07:34.196476 | orchestrator |
2026-03-01 01:07:34.196482 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-01 01:07:34.196487 | orchestrator | Sunday 01 March 2026 01:06:12 +0000 (0:00:00.485) 0:00:28.394 **********
2026-03-01 01:07:34.196493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-01 01:07:34.196502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-01 01:07:34.196506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-03-01 01:07:34.196509 | orchestrator |
2026-03-01 01:07:34.196512 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-01 01:07:34.196518 | orchestrator | Sunday 01 March 2026 01:06:13 +0000 (0:00:01.254) 0:00:29.649 **********
2026-03-01 01:07:34.196521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196525 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196531 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:34.196536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196539 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:34.196542 | orchestrator |
2026-03-01 01:07:34.196547 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-01 01:07:34.196551 | orchestrator | Sunday 01 March 2026 01:06:14 +0000 (0:00:00.597) 0:00:30.246 **********
2026-03-01 01:07:34.196554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196559 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196566 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:34.196569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196581 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:34.196590 | orchestrator |
2026-03-01 01:07:34.196595 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-01 01:07:34.196601 | orchestrator | Sunday 01 March 2026 01:06:15 +0000 (0:00:00.614) 0:00:30.861 **********
2026-03-01 01:07:34.196609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196633 | orchestrator |
2026-03-01 01:07:34.196638 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-01 01:07:34.196643 | orchestrator | Sunday 01 March 2026 01:06:16 +0000 (0:00:01.217) 0:00:32.079 **********
2026-03-01 01:07:34.196648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196675 | orchestrator |
2026-03-01 01:07:34.196681 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-01 01:07:34.196686 | orchestrator | Sunday 01 March 2026 01:06:18 +0000 (0:00:02.204) 0:00:34.284 **********
2026-03-01 01:07:34.196691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-01 01:07:34.196694 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-01 01:07:34.196697 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-01 01:07:34.196700 | orchestrator |
2026-03-01 01:07:34.196703 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-01 01:07:34.196706 | orchestrator | Sunday 01 March 2026 01:06:19 +0000 (0:00:01.340) 0:00:35.625 **********
2026-03-01 01:07:34.196710 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:07:34.196713 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:07:34.196716 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:07:34.196719 | orchestrator |
2026-03-01 01:07:34.196722 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-01 01:07:34.196725 | orchestrator | Sunday 01 March 2026 01:06:21 +0000 (0:00:01.499) 0:00:37.124 **********
2026-03-01 01:07:34.196728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196732 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:34.196735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196803 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:34.196817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196824 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:34.196828 | orchestrator |
2026-03-01 01:07:34.196832 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-01 01:07:34.196837 | orchestrator | Sunday 01 March 2026 01:06:21 +0000 (0:00:00.524) 0:00:37.649 **********
2026-03-01 01:07:34.196842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-01 01:07:34.196856 | orchestrator |
2026-03-01 01:07:34.196860 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-03-01 01:07:34.196863 | orchestrator | Sunday 01 March 2026 01:06:23 +0000 (0:00:01.416) 0:00:39.065 **********
2026-03-01 01:07:34.196866 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:07:34.196869 | orchestrator |
2026-03-01 01:07:34.196872 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-03-01 01:07:34.196875 | orchestrator | Sunday 01 March 2026 01:06:26 +0000 (0:00:03.002) 0:00:42.067 **********
2026-03-01 01:07:34.196878 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:07:34.196884 | orchestrator |
2026-03-01 01:07:34.196887 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-03-01 01:07:34.196890 | orchestrator | Sunday 01 March 2026 01:06:28 +0000 (0:00:02.276) 0:00:44.344 **********
2026-03-01 01:07:34.196893 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:07:34.196896 | orchestrator |
2026-03-01 01:07:34.196900 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-01 01:07:34.196905 | orchestrator | Sunday 01 March 2026 01:06:42 +0000 (0:00:13.656) 0:00:58.001 **********
2026-03-01 01:07:34.196908 | orchestrator |
2026-03-01 01:07:34.196911 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-01 01:07:34.196914 | orchestrator | Sunday 01 March 2026 01:06:42 +0000 (0:00:00.066) 0:00:58.067 **********
2026-03-01 01:07:34.196917 | orchestrator |
2026-03-01 01:07:34.196921 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-01 01:07:34.196924 | orchestrator | Sunday 01 March 2026 01:06:42 +0000 (0:00:00.069) 0:00:58.137 **********
2026-03-01 01:07:34.196927 | orchestrator |
2026-03-01 01:07:34.196932 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-01 01:07:34.196935 | orchestrator | Sunday 01 March 2026 01:06:42 +0000 (0:00:00.076) 0:00:58.213 **********
2026-03-01 01:07:34.196938 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:07:34.196941 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:07:34.196945 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:07:34.196948 | orchestrator |
2026-03-01 01:07:34.196951 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:07:34.196954 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-01 01:07:34.196958 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-01 01:07:34.196961 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-01 01:07:34.196966 | orchestrator |
2026-03-01 01:07:34.196971 | orchestrator |
2026-03-01 01:07:34.196976 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:07:34.196981 | orchestrator | Sunday 01 March 2026 01:07:30 +0000 (0:00:48.351) 0:01:46.565 **********
2026-03-01 01:07:34.196986 | orchestrator | ===============================================================================
2026-03-01 01:07:34.196991 | orchestrator | placement : Restart placement-api container ---------------------------- 48.35s
2026-03-01 01:07:34.196996 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.66s
2026-03-01 01:07:34.197002 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.53s
2026-03-01 01:07:34.197006 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.29s
2026-03-01 01:07:34.197009 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.66s
2026-03-01 01:07:34.197012 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.29s
2026-03-01 01:07:34.197015 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.11s
2026-03-01 01:07:34.197018 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.03s
2026-03-01 01:07:34.197021 | orchestrator | placement : Creating placement databases -------------------------------- 3.00s
2026-03-01 01:07:34.197024 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.28s
2026-03-01 01:07:34.197027 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.20s
2026-03-01 01:07:34.197030 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.50s
2026-03-01 01:07:34.197033 | orchestrator | placement : Check placement containers ---------------------------------- 1.42s
2026-03-01 01:07:34.197036 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.34s
2026-03-01 01:07:34.197042 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.25s
2026-03-01 01:07:34.197046 | orchestrator | placement : Copying over config.json files for services ----------------- 1.22s
2026-03-01 01:07:34.197049 | orchestrator | placement : include_tasks ----------------------------------------------- 1.08s
2026-03-01 01:07:34.197052 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.75s
2026-03-01 01:07:34.197055 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s
2026-03-01 01:07:34.197058 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s
2026-03-01 01:07:34.197061 | orchestrator | 2026-03-01 01:07:34 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:34.197065 | orchestrator | 2026-03-01 01:07:34 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:34.199771 | orchestrator | 2026-03-01 01:07:34 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:34.199819 | orchestrator | 2026-03-01 01:07:34 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:37.240329 | orchestrator | 2026-03-01 01:07:37 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:37.241186 | orchestrator | 2026-03-01 01:07:37 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:37.242754 | orchestrator | 2026-03-01 01:07:37 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:37.243969 | orchestrator | 2026-03-01 01:07:37 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:37.244013 | orchestrator | 2026-03-01 01:07:37 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:40.284686 | orchestrator | 2026-03-01 01:07:40 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:40.286497 | orchestrator | 2026-03-01 01:07:40 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:40.290620 | orchestrator | 2026-03-01 01:07:40 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:40.291653 | orchestrator | 2026-03-01 01:07:40 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:40.291725 | orchestrator | 2026-03-01 01:07:40 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:43.346970 | orchestrator | 2026-03-01 01:07:43 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:43.347030 | orchestrator | 2026-03-01 01:07:43 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:43.347160 | orchestrator | 2026-03-01 01:07:43 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:43.347779 | orchestrator | 2026-03-01 01:07:43 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:43.347829 | orchestrator | 2026-03-01 01:07:43 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:46.386351 | orchestrator | 2026-03-01 01:07:46 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:46.386398 | orchestrator | 2026-03-01 01:07:46 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:46.387121 | orchestrator | 2026-03-01 01:07:46 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01 01:07:46.388095 | orchestrator | 2026-03-01 01:07:46 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED
2026-03-01 01:07:46.388282 | orchestrator | 2026-03-01 01:07:46 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:07:49.417160 | orchestrator | 2026-03-01 01:07:49 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:07:49.417890 | orchestrator | 2026-03-01 01:07:49 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED
2026-03-01 01:07:49.418992 | orchestrator | 2026-03-01 01:07:49 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED
2026-03-01
01:07:49.419995 | orchestrator | 2026-03-01 01:07:49 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state STARTED 2026-03-01 01:07:49.420021 | orchestrator | 2026-03-01 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:07:52.449280 | orchestrator | 2026-03-01 01:07:52 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:07:52.451025 | orchestrator | 2026-03-01 01:07:52 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED 2026-03-01 01:07:52.453236 | orchestrator | 2026-03-01 01:07:52 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:07:52.457474 | orchestrator | 2026-03-01 01:07:52 | INFO  | Task a0877522-c642-45d5-9a6a-8447111682c7 is in state SUCCESS 2026-03-01 01:07:52.458733 | orchestrator | 2026-03-01 01:07:52.458777 | orchestrator | 2026-03-01 01:07:52.458784 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:07:52.458790 | orchestrator | 2026-03-01 01:07:52.458796 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:07:52.458801 | orchestrator | Sunday 01 March 2026 01:03:40 +0000 (0:00:00.304) 0:00:00.304 ********** 2026-03-01 01:07:52.458806 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:07:52.458812 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:07:52.458817 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:07:52.458822 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:07:52.458827 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:07:52.458832 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:07:52.458837 | orchestrator | 2026-03-01 01:07:52.458841 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:07:52.458846 | orchestrator | Sunday 01 March 2026 01:03:41 +0000 (0:00:00.801) 0:00:01.106 ********** 2026-03-01 01:07:52.458851 | orchestrator | 
ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-01 01:07:52.458856 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-01 01:07:52.458861 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-01 01:07:52.458867 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-01 01:07:52.458872 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-01 01:07:52.458877 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-01 01:07:52.458883 | orchestrator |
2026-03-01 01:07:52.458888 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-01 01:07:52.458893 | orchestrator |
2026-03-01 01:07:52.458898 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-01 01:07:52.458903 | orchestrator | Sunday 01 March 2026 01:03:42 +0000 (0:00:00.704) 0:00:01.811 **********
2026-03-01 01:07:52.458967 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:07:52.458972 | orchestrator |
2026-03-01 01:07:52.458976 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-01 01:07:52.458979 | orchestrator | Sunday 01 March 2026 01:03:43 +0000 (0:00:01.018) 0:00:02.829 **********
2026-03-01 01:07:52.458982 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:52.458985 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:52.458989 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:52.459019 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:07:52.459022 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:07:52.459042 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:07:52.459049 | orchestrator |
2026-03-01 01:07:52.459264 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-01 01:07:52.459278 | orchestrator | Sunday 01 March 2026 01:03:44 +0000 (0:00:01.182) 0:00:04.012 **********
2026-03-01 01:07:52.459283 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:52.459289 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:52.459294 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:52.459300 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:07:52.459305 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:07:52.459310 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:07:52.459316 | orchestrator |
2026-03-01 01:07:52.459321 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-01 01:07:52.459327 | orchestrator | Sunday 01 March 2026 01:03:45 +0000 (0:00:01.020) 0:00:05.033 **********
2026-03-01 01:07:52.459332 | orchestrator | ok: [testbed-node-0] => {
2026-03-01 01:07:52.459339 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459344 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459350 | orchestrator | }
2026-03-01 01:07:52.459355 | orchestrator | ok: [testbed-node-1] => {
2026-03-01 01:07:52.459361 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459367 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459372 | orchestrator | }
2026-03-01 01:07:52.459378 | orchestrator | ok: [testbed-node-2] => {
2026-03-01 01:07:52.459383 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459389 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459394 | orchestrator | }
2026-03-01 01:07:52.459399 | orchestrator | ok: [testbed-node-3] => {
2026-03-01 01:07:52.459405 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459410 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459416 | orchestrator | }
2026-03-01 01:07:52.459421 | orchestrator | ok: [testbed-node-4] => {
2026-03-01 01:07:52.459427 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459432 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459438 | orchestrator | }
2026-03-01 01:07:52.459443 | orchestrator | ok: [testbed-node-5] => {
2026-03-01 01:07:52.459449 | orchestrator |     "changed": false,
2026-03-01 01:07:52.459454 | orchestrator |     "msg": "All assertions passed"
2026-03-01 01:07:52.459460 | orchestrator | }
2026-03-01 01:07:52.459465 | orchestrator |
2026-03-01 01:07:52.459471 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-01 01:07:52.459476 | orchestrator | Sunday 01 March 2026 01:03:46 +0000 (0:00:00.648) 0:00:05.682 **********
2026-03-01 01:07:52.459482 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.459487 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.459492 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.459497 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.459502 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.459507 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:07:52.459512 | orchestrator |
2026-03-01 01:07:52.459518 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-01 01:07:52.459523 | orchestrator | Sunday 01 March 2026 01:03:46 +0000 (0:00:00.538) 0:00:06.220 **********
2026-03-01 01:07:52.459529 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-01 01:07:52.459534 | orchestrator |
2026-03-01 01:07:52.459539 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-01 01:07:52.459544 | orchestrator | Sunday 01 March 2026 01:03:49 +0000 (0:00:02.952) 0:00:09.173 **********
2026-03-01 01:07:52.459550 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-01 01:07:52.459636 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-01 01:07:52.459643 | orchestrator |
2026-03-01 01:07:52.459667 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-01 01:07:52.459673 | orchestrator | Sunday 01 March 2026 01:03:55 +0000 (0:00:05.647) 0:00:14.820 **********
2026-03-01 01:07:52.459687 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-01 01:07:52.459692 | orchestrator |
2026-03-01 01:07:52.459698 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-01 01:07:52.459703 | orchestrator | Sunday 01 March 2026 01:03:58 +0000 (0:00:02.804) 0:00:17.625 **********
2026-03-01 01:07:52.459709 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-01 01:07:52.459715 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-01 01:07:52.459720 | orchestrator |
2026-03-01 01:07:52.459726 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-01 01:07:52.459731 | orchestrator | Sunday 01 March 2026 01:04:02 +0000 (0:00:03.786) 0:00:21.411 **********
2026-03-01 01:07:52.459737 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-01 01:07:52.459742 | orchestrator |
2026-03-01 01:07:52.459748 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-01 01:07:52.459753 | orchestrator | Sunday 01 March 2026 01:04:05 +0000 (0:00:03.484) 0:00:24.896 **********
2026-03-01 01:07:52.459759 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-01 01:07:52.459764 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-01 01:07:52.459769 | orchestrator |
2026-03-01 01:07:52.459774 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-01 01:07:52.459780 | orchestrator | Sunday 01 March 2026 01:04:12 +0000 (0:00:06.880) 0:00:31.776 **********
2026-03-01 01:07:52.459785 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.459790 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.459795 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.459801 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.459811 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.459817 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:07:52.459822 | orchestrator |
2026-03-01 01:07:52.459828 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-01 01:07:52.459833 | orchestrator | Sunday 01 March 2026 01:04:13 +0000 (0:00:00.621) 0:00:32.398 **********
2026-03-01 01:07:52.459839 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.459844 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.459849 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.459855 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.459860 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.459866 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:07:52.459871 | orchestrator |
2026-03-01 01:07:52.459877 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-01 01:07:52.459882 | orchestrator | Sunday 01 March 2026 01:04:15 +0000 (0:00:02.117) 0:00:34.515 **********
2026-03-01 01:07:52.459888 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:07:52.459893 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:07:52.459899 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:07:52.459904 | orchestrator | ok: [testbed-node-3]
2026-03-01 01:07:52.459910 | orchestrator | ok: [testbed-node-4]
2026-03-01 01:07:52.459915 | orchestrator | ok: [testbed-node-5]
2026-03-01 01:07:52.459920 | orchestrator |
2026-03-01 01:07:52.459926 | orchestrator | TASK [Setting sysctl values]
***************************************************
2026-03-01 01:07:52.459931 | orchestrator | Sunday 01 March 2026 01:04:16 +0000 (0:00:01.127) 0:00:35.643 **********
2026-03-01 01:07:52.459937 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.459942 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.459948 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.459953 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.459958 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.459963 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:07:52.459969 | orchestrator |
2026-03-01 01:07:52.459974 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-01 01:07:52.459983 | orchestrator | Sunday 01 March 2026 01:04:18 +0000 (0:00:02.648) 0:00:38.292 **********
2026-03-01 01:07:52.459991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460045 | orchestrator |
2026-03-01 01:07:52.460051 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-01 01:07:52.460056 | orchestrator | Sunday 01 March 2026 01:04:22 +0000 (0:00:03.391) 0:00:41.684 **********
2026-03-01 01:07:52.460105 | orchestrator | [WARNING]: Skipped
2026-03-01 01:07:52.460111 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-01 01:07:52.460116 | orchestrator | due to this access issue:
2026-03-01 01:07:52.460121 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-01 01:07:52.460126 | orchestrator | a directory
2026-03-01 01:07:52.460132 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-01 01:07:52.460137 | orchestrator |
2026-03-01 01:07:52.460142 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-01 01:07:52.460159 | orchestrator | Sunday 01 March 2026 01:04:23 +0000 (0:00:00.858) 0:00:42.542 **********
2026-03-01 01:07:52.460165 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-01 01:07:52.460171 | orchestrator |
2026-03-01 01:07:52.460176 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-01 01:07:52.460182 | orchestrator | Sunday 01 March 2026 01:04:24 +0000 (0:00:01.305) 0:00:43.847 **********
2026-03-01 01:07:52.460187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460237 | orchestrator |
2026-03-01 01:07:52.460242 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-01 01:07:52.460249 | orchestrator | Sunday 01 March 2026 01:04:28 +0000 (0:00:03.722) 0:00:47.570 **********
2026-03-01 01:07:52.460255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460263 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.460269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460274 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.460281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460286 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:07:52.460304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460322 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.460327 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.460333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460339 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.460344 | orchestrator |
2026-03-01 01:07:52.460350 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-01 01:07:52.460355 | orchestrator | Sunday 01 March 2026 01:04:31 +0000 (0:00:03.266) 0:00:50.837 **********
2026-03-01 01:07:52.460360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460367 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:07:52.460376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460382 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:07:52.460387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-01 01:07:52.460393 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:07:52.460401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460405 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:07:52.460409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-01 01:07:52.460412 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:07:52.460416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460420 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460424 | orchestrator | 2026-03-01 01:07:52.460428 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-01 01:07:52.460431 | orchestrator | Sunday 01 March 2026 01:04:34 +0000 (0:00:03.195) 0:00:54.032 ********** 2026-03-01 01:07:52.460435 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460439 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.460443 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.460447 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460450 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460453 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460456 | orchestrator | 2026-03-01 01:07:52.460460 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-01 01:07:52.460466 | orchestrator | Sunday 01 March 2026 01:04:37 +0000 (0:00:02.792) 0:00:56.825 ********** 2026-03-01 01:07:52.460469 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460472 | orchestrator | 2026-03-01 01:07:52.460476 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-01 01:07:52.460479 | orchestrator | Sunday 01 March 2026 01:04:37 +0000 (0:00:00.139) 0:00:56.964 ********** 2026-03-01 01:07:52.460482 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460485 | 
orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.460488 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.460493 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460496 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460499 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460502 | orchestrator | 2026-03-01 01:07:52.460505 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-01 01:07:52.460509 | orchestrator | Sunday 01 March 2026 01:04:38 +0000 (0:00:00.621) 0:00:57.586 ********** 2026-03-01 01:07:52.460513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460517 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460524 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.460527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460530 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.460536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460541 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460548 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460556 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460559 | orchestrator | 2026-03-01 01:07:52.460562 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-01 01:07:52.460565 | orchestrator | Sunday 01 March 2026 01:04:41 +0000 (0:00:02.892) 0:01:00.478 ********** 2026-03-01 01:07:52.460569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-01 01:07:52.460658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460665 | orchestrator | 2026-03-01 01:07:52.460668 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-01 01:07:52.460671 | orchestrator | Sunday 01 March 2026 01:04:45 +0000 (0:00:04.595) 0:01:05.074 ********** 2026-03-01 01:07:52.460674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.460704 | orchestrator | 2026-03-01 01:07:52.460707 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-01 01:07:52.460710 | orchestrator | Sunday 01 March 2026 01:04:52 +0000 (0:00:06.680) 0:01:11.754 ********** 2026-03-01 01:07:52.460717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460720 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.460725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460728 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.460735 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.460738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460744 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460751 | orchestrator | skipping: [testbed-node-4] 2026-03-01 
01:07:52.460756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460759 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460762 | orchestrator | 2026-03-01 01:07:52.460766 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-01 01:07:52.460769 | orchestrator | Sunday 01 March 2026 01:04:55 +0000 (0:00:02.677) 0:01:14.432 ********** 2026-03-01 01:07:52.460772 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460775 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460778 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:07:52.460782 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:07:52.460785 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460791 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:07:52.460796 | orchestrator | 2026-03-01 01:07:52.460804 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-01 01:07:52.460813 | orchestrator | Sunday 01 March 2026 01:04:57 +0000 (0:00:02.891) 0:01:17.323 ********** 2026-03-01 01:07:52.460821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460826 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460841 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.460851 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.460880 | orchestrator | 2026-03-01 01:07:52.460885 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-01 01:07:52.460958 | orchestrator | Sunday 01 March 2026 01:05:02 +0000 (0:00:04.363) 0:01:21.687 ********** 2026-03-01 01:07:52.460968 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.460974 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.460977 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.460980 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.460984 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.460987 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.460990 | orchestrator | 2026-03-01 01:07:52.460993 | orchestrator 
| TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-01 01:07:52.460996 | orchestrator | Sunday 01 March 2026 01:05:04 +0000 (0:00:02.278) 0:01:23.966 ********** 2026-03-01 01:07:52.460999 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461002 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461005 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461008 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461011 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461014 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461017 | orchestrator | 2026-03-01 01:07:52.461020 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-01 01:07:52.461024 | orchestrator | Sunday 01 March 2026 01:05:06 +0000 (0:00:02.167) 0:01:26.133 ********** 2026-03-01 01:07:52.461027 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461030 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461033 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461036 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461039 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461042 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461045 | orchestrator | 2026-03-01 01:07:52.461049 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-01 01:07:52.461052 | orchestrator | Sunday 01 March 2026 01:05:08 +0000 (0:00:01.937) 0:01:28.071 ********** 2026-03-01 01:07:52.461055 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461076 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461080 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461083 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461086 | orchestrator | skipping: [testbed-node-4] 2026-03-01 
01:07:52.461089 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461092 | orchestrator | 2026-03-01 01:07:52.461096 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-01 01:07:52.461099 | orchestrator | Sunday 01 March 2026 01:05:10 +0000 (0:00:01.878) 0:01:29.949 ********** 2026-03-01 01:07:52.461102 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461105 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461108 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461112 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461118 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461121 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461125 | orchestrator | 2026-03-01 01:07:52.461128 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-01 01:07:52.461131 | orchestrator | Sunday 01 March 2026 01:05:13 +0000 (0:00:02.698) 0:01:32.647 ********** 2026-03-01 01:07:52.461134 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461137 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461140 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461143 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461147 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461150 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461153 | orchestrator | 2026-03-01 01:07:52.461156 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-01 01:07:52.461159 | orchestrator | Sunday 01 March 2026 01:05:15 +0000 (0:00:01.903) 0:01:34.550 ********** 2026-03-01 01:07:52.461162 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461170 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461173 
| orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461176 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461179 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461182 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461185 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461189 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461192 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461197 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461200 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-01 01:07:52.461203 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461206 | orchestrator | 2026-03-01 01:07:52.461210 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-01 01:07:52.461213 | orchestrator | Sunday 01 March 2026 01:05:17 +0000 (0:00:01.926) 0:01:36.477 ********** 2026-03-01 01:07:52.461216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461220 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461226 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461238 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461245 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461253 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461257 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461260 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461263 | orchestrator | 2026-03-01 01:07:52.461266 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-01 01:07:52.461270 | orchestrator | Sunday 01 March 2026 01:05:20 +0000 (0:00:03.059) 0:01:39.536 ********** 2026-03-01 01:07:52.461273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461277 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461289 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461301 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461305 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461311 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461320 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461323 | orchestrator | 2026-03-01 01:07:52.461327 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-01 01:07:52.461330 | orchestrator | Sunday 01 March 2026 01:05:22 +0000 (0:00:02.150) 0:01:41.686 ********** 2026-03-01 01:07:52.461333 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461338 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461341 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461344 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461348 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461351 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461355 | orchestrator | 2026-03-01 01:07:52.461358 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-01 01:07:52.461361 | orchestrator | Sunday 01 March 2026 01:05:24 +0000 (0:00:01.768) 0:01:43.455 ********** 2026-03-01 01:07:52.461365 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461368 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461371 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461375 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:07:52.461378 | orchestrator | changed: 
[testbed-node-5] 2026-03-01 01:07:52.461381 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:07:52.461384 | orchestrator | 2026-03-01 01:07:52.461387 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-01 01:07:52.461391 | orchestrator | Sunday 01 March 2026 01:05:28 +0000 (0:00:04.778) 0:01:48.234 ********** 2026-03-01 01:07:52.461394 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461397 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461400 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461403 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461406 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461410 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461413 | orchestrator | 2026-03-01 01:07:52.461416 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-01 01:07:52.461419 | orchestrator | Sunday 01 March 2026 01:05:30 +0000 (0:00:01.982) 0:01:50.216 ********** 2026-03-01 01:07:52.461422 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461426 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461429 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461432 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461435 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461440 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461443 | orchestrator | 2026-03-01 01:07:52.461446 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-01 01:07:52.461449 | orchestrator | Sunday 01 March 2026 01:05:32 +0000 (0:00:01.795) 0:01:52.012 ********** 2026-03-01 01:07:52.461453 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461456 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461459 | orchestrator | skipping: 
[testbed-node-2] 2026-03-01 01:07:52.461462 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461465 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461469 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461472 | orchestrator | 2026-03-01 01:07:52.461475 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-01 01:07:52.461478 | orchestrator | Sunday 01 March 2026 01:05:34 +0000 (0:00:01.660) 0:01:53.672 ********** 2026-03-01 01:07:52.461481 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461484 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461487 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461491 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461496 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461500 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461503 | orchestrator | 2026-03-01 01:07:52.461506 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-01 01:07:52.461509 | orchestrator | Sunday 01 March 2026 01:05:36 +0000 (0:00:01.810) 0:01:55.483 ********** 2026-03-01 01:07:52.461512 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461516 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461519 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461522 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461525 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461528 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461531 | orchestrator | 2026-03-01 01:07:52.461534 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-01 01:07:52.461538 | orchestrator | Sunday 01 March 2026 01:05:38 +0000 (0:00:01.982) 0:01:57.465 ********** 2026-03-01 01:07:52.461542 | orchestrator | skipping: 
[testbed-node-3] 2026-03-01 01:07:52.461546 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461549 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461552 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461555 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461558 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461562 | orchestrator | 2026-03-01 01:07:52.461565 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-01 01:07:52.461568 | orchestrator | Sunday 01 March 2026 01:05:41 +0000 (0:00:03.129) 0:02:00.595 ********** 2026-03-01 01:07:52.461571 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461574 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461577 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461581 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461584 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461587 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461590 | orchestrator | 2026-03-01 01:07:52.461593 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-01 01:07:52.461599 | orchestrator | Sunday 01 March 2026 01:05:43 +0000 (0:00:01.932) 0:02:02.527 ********** 2026-03-01 01:07:52.461604 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461612 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461619 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461624 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461629 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461635 | orchestrator | skipping: 
[testbed-node-2] 2026-03-01 01:07:52.461640 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461645 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461654 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461659 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461665 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-01 01:07:52.461671 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461677 | orchestrator | 2026-03-01 01:07:52.461683 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-01 01:07:52.461688 | orchestrator | Sunday 01 March 2026 01:05:45 +0000 (0:00:02.270) 0:02:04.798 ********** 2026-03-01 01:07:52.461694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461704 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461709 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461713 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-01 01:07:52.461722 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461726 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461730 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461742 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-01 01:07:52.461750 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461753 | orchestrator | 2026-03-01 01:07:52.461756 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-01 01:07:52.461760 | orchestrator | Sunday 01 March 2026 01:05:47 +0000 (0:00:02.531) 0:02:07.329 ********** 2026-03-01 01:07:52.461763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.461766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.461770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.461775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.461784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-01 01:07:52.461788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-01 01:07:52.461791 | orchestrator | 2026-03-01 
01:07:52.461795 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-01 01:07:52.461798 | orchestrator | Sunday 01 March 2026 01:05:50 +0000 (0:00:02.698) 0:02:10.028 ********** 2026-03-01 01:07:52.461801 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:07:52.461804 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:07:52.461807 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:07:52.461810 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:07:52.461813 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:07:52.461816 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:07:52.461819 | orchestrator | 2026-03-01 01:07:52.461822 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-01 01:07:52.461826 | orchestrator | Sunday 01 March 2026 01:05:51 +0000 (0:00:00.515) 0:02:10.543 ********** 2026-03-01 01:07:52.461829 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:07:52.461832 | orchestrator | 2026-03-01 01:07:52.461835 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-01 01:07:52.461838 | orchestrator | Sunday 01 March 2026 01:05:53 +0000 (0:00:01.925) 0:02:12.468 ********** 2026-03-01 01:07:52.461841 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:07:52.461844 | orchestrator | 2026-03-01 01:07:52.461847 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-01 01:07:52.461851 | orchestrator | Sunday 01 March 2026 01:05:55 +0000 (0:00:02.369) 0:02:14.838 ********** 2026-03-01 01:07:52.461854 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:07:52.461857 | orchestrator | 2026-03-01 01:07:52.461860 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461865 | orchestrator | Sunday 01 March 2026 01:06:31 +0000 (0:00:36.439) 0:02:51.278 
********** 2026-03-01 01:07:52.461869 | orchestrator | 2026-03-01 01:07:52.461872 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461875 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.078) 0:02:51.357 ********** 2026-03-01 01:07:52.461878 | orchestrator | 2026-03-01 01:07:52.461881 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461885 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.244) 0:02:51.601 ********** 2026-03-01 01:07:52.461888 | orchestrator | 2026-03-01 01:07:52.461891 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461894 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.085) 0:02:51.686 ********** 2026-03-01 01:07:52.461897 | orchestrator | 2026-03-01 01:07:52.461902 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461905 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.079) 0:02:51.766 ********** 2026-03-01 01:07:52.461908 | orchestrator | 2026-03-01 01:07:52.461912 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-01 01:07:52.461915 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.063) 0:02:51.829 ********** 2026-03-01 01:07:52.461918 | orchestrator | 2026-03-01 01:07:52.461921 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-01 01:07:52.461924 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.065) 0:02:51.895 ********** 2026-03-01 01:07:52.461927 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:07:52.461930 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:07:52.461933 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:07:52.461936 | orchestrator | 2026-03-01 
01:07:52.461940 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-01 01:07:52.461943 | orchestrator | Sunday 01 March 2026 01:06:58 +0000 (0:00:25.501) 0:03:17.397 ********** 2026-03-01 01:07:52.461946 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:07:52.461949 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:07:52.461952 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:07:52.461956 | orchestrator | 2026-03-01 01:07:52.461959 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:07:52.461963 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 01:07:52.461967 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-01 01:07:52.461972 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-01 01:07:52.461975 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 01:07:52.461978 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 01:07:52.461981 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-01 01:07:52.461984 | orchestrator | 2026-03-01 01:07:52.461988 | orchestrator | 2026-03-01 01:07:52.461991 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:07:52.461994 | orchestrator | Sunday 01 March 2026 01:07:49 +0000 (0:00:51.946) 0:04:09.343 ********** 2026-03-01 01:07:52.461997 | orchestrator | =============================================================================== 2026-03-01 01:07:52.462000 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.95s 
2026-03-01 01:07:52.462003 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 36.44s 2026-03-01 01:07:52.462010 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.50s 2026-03-01 01:07:52.462049 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 6.88s 2026-03-01 01:07:52.462052 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.68s 2026-03-01 01:07:52.462055 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.65s 2026-03-01 01:07:52.462088 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.78s 2026-03-01 01:07:52.462092 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.60s 2026-03-01 01:07:52.462095 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.36s 2026-03-01 01:07:52.462098 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.79s 2026-03-01 01:07:52.462102 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.72s 2026-03-01 01:07:52.462105 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.49s 2026-03-01 01:07:52.462108 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.39s 2026-03-01 01:07:52.462111 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.27s 2026-03-01 01:07:52.462114 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.20s 2026-03-01 01:07:52.462117 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.13s 2026-03-01 01:07:52.462120 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.06s 
2026-03-01 01:07:52.462123 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 2.95s 2026-03-01 01:07:52.462126 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.89s 2026-03-01 01:07:52.462130 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.89s 2026-03-01 01:07:52.462133 | orchestrator | 2026-03-01 01:07:52 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:07:52.462136 | orchestrator | 2026-03-01 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:07:55.492273 | orchestrator | 2026-03-01 01:07:55 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:07:55.494807 | orchestrator | 2026-03-01 01:07:55 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED 2026-03-01 01:07:55.496215 | orchestrator | 2026-03-01 01:07:55 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:07:55.498094 | orchestrator | 2026-03-01 01:07:55 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:07:55.498148 | orchestrator | 2026-03-01 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:07:58.540807 | orchestrator | 2026-03-01 01:07:58 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:07:58.541428 | orchestrator | 2026-03-01 01:07:58 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED 2026-03-01 01:07:58.542490 | orchestrator | 2026-03-01 01:07:58 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:07:58.543383 | orchestrator | 2026-03-01 01:07:58 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:07:58.543407 | orchestrator | 2026-03-01 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:01.595448 | orchestrator | 2026-03-01 01:08:01 | 
INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:01.595655 | orchestrator | 2026-03-01 01:08:01 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED 2026-03-01 01:08:01.596465 | orchestrator | 2026-03-01 01:08:01 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:08:01.597366 | orchestrator | 2026-03-01 01:08:01 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:01.597387 | orchestrator | 2026-03-01 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:04.648933 | orchestrator | 2026-03-01 01:08:04 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:04.650804 | orchestrator | 2026-03-01 01:08:04 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state STARTED 2026-03-01 01:08:04.652770 | orchestrator | 2026-03-01 01:08:04 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:08:04.654298 | orchestrator | 2026-03-01 01:08:04 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:04.654641 | orchestrator | 2026-03-01 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:07.701833 | orchestrator | 2026-03-01 01:08:07 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:07.702119 | orchestrator | 2026-03-01 01:08:07 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:07.704445 | orchestrator | 2026-03-01 01:08:07 | INFO  | Task cc2a548b-8567-407b-9d4a-36cf55ebc4ca is in state SUCCESS 2026-03-01 01:08:07.706680 | orchestrator | 2026-03-01 01:08:07 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state STARTED 2026-03-01 01:08:07.708802 | orchestrator | 2026-03-01 01:08:07 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:07.709177 | orchestrator | 2026-03-01 01:08:07 | INFO  | 
Wait 1 second(s) until the next check 2026-03-01 01:08:10.757219 | orchestrator | 2026-03-01 01:08:10 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:10.757831 | orchestrator | 2026-03-01 01:08:10 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:10.760214 | orchestrator | 2026-03-01 01:08:10.760256 | orchestrator | 2026-03-01 01:08:10.760261 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:08:10.760265 | orchestrator | 2026-03-01 01:08:10.760269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:08:10.760273 | orchestrator | Sunday 01 March 2026 01:07:35 +0000 (0:00:00.268) 0:00:00.268 ********** 2026-03-01 01:08:10.760278 | orchestrator | ok: [testbed-manager] 2026-03-01 01:08:10.760283 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:08:10.760287 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:08:10.760290 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:08:10.760294 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:08:10.760298 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:08:10.760302 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:08:10.760306 | orchestrator | 2026-03-01 01:08:10.760310 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:08:10.760314 | orchestrator | Sunday 01 March 2026 01:07:36 +0000 (0:00:00.710) 0:00:00.979 ********** 2026-03-01 01:08:10.760318 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760321 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760325 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760367 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760373 | orchestrator | ok: 
[testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760377 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760381 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-01 01:08:10.760384 | orchestrator | 2026-03-01 01:08:10.760388 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-01 01:08:10.760404 | orchestrator | 2026-03-01 01:08:10.760408 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-01 01:08:10.760412 | orchestrator | Sunday 01 March 2026 01:07:36 +0000 (0:00:00.624) 0:00:01.604 ********** 2026-03-01 01:08:10.760416 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:08:10.760421 | orchestrator | 2026-03-01 01:08:10.760424 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-01 01:08:10.760428 | orchestrator | Sunday 01 March 2026 01:07:37 +0000 (0:00:01.265) 0:00:02.870 ********** 2026-03-01 01:08:10.760432 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-01 01:08:10.760435 | orchestrator | 2026-03-01 01:08:10.760439 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-01 01:08:10.760443 | orchestrator | Sunday 01 March 2026 01:07:41 +0000 (0:00:03.174) 0:00:06.044 ********** 2026-03-01 01:08:10.760447 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-01 01:08:10.760452 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-01 01:08:10.760455 | orchestrator | 2026-03-01 01:08:10.760459 | orchestrator 
| TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-01 01:08:10.760470 | orchestrator | Sunday 01 March 2026 01:07:47 +0000 (0:00:06.506) 0:00:12.551 ********** 2026-03-01 01:08:10.760474 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-01 01:08:10.760477 | orchestrator | 2026-03-01 01:08:10.760481 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-01 01:08:10.760485 | orchestrator | Sunday 01 March 2026 01:07:50 +0000 (0:00:03.098) 0:00:15.650 ********** 2026-03-01 01:08:10.760584 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-01 01:08:10.760590 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 01:08:10.760594 | orchestrator | 2026-03-01 01:08:10.760597 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-01 01:08:10.760601 | orchestrator | Sunday 01 March 2026 01:07:54 +0000 (0:00:03.894) 0:00:19.544 ********** 2026-03-01 01:08:10.760605 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-01 01:08:10.760609 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-01 01:08:10.760613 | orchestrator | 2026-03-01 01:08:10.760616 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-01 01:08:10.760620 | orchestrator | Sunday 01 March 2026 01:08:00 +0000 (0:00:05.903) 0:00:25.448 ********** 2026-03-01 01:08:10.760624 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-01 01:08:10.760628 | orchestrator | 2026-03-01 01:08:10.760631 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:08:10.760635 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760639 | orchestrator | testbed-node-0 : ok=3  
changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760643 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760647 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760653 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760666 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760682 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:08:10.760691 | orchestrator | 2026-03-01 01:08:10.760698 | orchestrator | 2026-03-01 01:08:10.760704 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:08:10.760711 | orchestrator | Sunday 01 March 2026 01:08:05 +0000 (0:00:04.824) 0:00:30.273 ********** 2026-03-01 01:08:10.760717 | orchestrator | =============================================================================== 2026-03-01 01:08:10.760723 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.51s 2026-03-01 01:08:10.760729 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.90s 2026-03-01 01:08:10.760736 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.82s 2026-03-01 01:08:10.760743 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.90s 2026-03-01 01:08:10.760749 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.17s 2026-03-01 01:08:10.760755 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.10s 2026-03-01 01:08:10.760763 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.27s 2026-03-01 01:08:10.760769 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-03-01 01:08:10.760776 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-03-01 01:08:10.760782 | orchestrator | 2026-03-01 01:08:10.760789 | orchestrator | 2026-03-01 01:08:10.760795 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:08:10.760802 | orchestrator | 2026-03-01 01:08:10.760808 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:08:10.760814 | orchestrator | Sunday 01 March 2026 01:06:24 +0000 (0:00:00.252) 0:00:00.252 ********** 2026-03-01 01:08:10.760821 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:08:10.760828 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:08:10.760835 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:08:10.760841 | orchestrator | 2026-03-01 01:08:10.760848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:08:10.760854 | orchestrator | Sunday 01 March 2026 01:06:24 +0000 (0:00:00.297) 0:00:00.550 ********** 2026-03-01 01:08:10.760861 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-01 01:08:10.760868 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-01 01:08:10.760874 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-01 01:08:10.760882 | orchestrator | 2026-03-01 01:08:10.760886 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-01 01:08:10.760890 | orchestrator | 2026-03-01 01:08:10.760894 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-01 01:08:10.760897 | orchestrator | Sunday 01 March 2026 01:06:25 +0000 (0:00:00.440) 
0:00:00.991 ********** 2026-03-01 01:08:10.760901 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:08:10.760905 | orchestrator | 2026-03-01 01:08:10.760909 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-01 01:08:10.760916 | orchestrator | Sunday 01 March 2026 01:06:26 +0000 (0:00:00.584) 0:00:01.575 ********** 2026-03-01 01:08:10.760920 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-01 01:08:10.760924 | orchestrator | 2026-03-01 01:08:10.760929 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-01 01:08:10.760935 | orchestrator | Sunday 01 March 2026 01:06:29 +0000 (0:00:03.318) 0:00:04.894 ********** 2026-03-01 01:08:10.760941 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-01 01:08:10.760947 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-01 01:08:10.760958 | orchestrator | 2026-03-01 01:08:10.760965 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-01 01:08:10.760971 | orchestrator | Sunday 01 March 2026 01:06:35 +0000 (0:00:06.603) 0:00:11.497 ********** 2026-03-01 01:08:10.760977 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-01 01:08:10.760983 | orchestrator | 2026-03-01 01:08:10.760989 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-01 01:08:10.760995 | orchestrator | Sunday 01 March 2026 01:06:39 +0000 (0:00:03.955) 0:00:15.453 ********** 2026-03-01 01:08:10.761001 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-01 01:08:10.761007 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 
01:08:10.761014 | orchestrator | 2026-03-01 01:08:10.761020 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-01 01:08:10.761040 | orchestrator | Sunday 01 March 2026 01:06:43 +0000 (0:00:03.888) 0:00:19.341 ********** 2026-03-01 01:08:10.761048 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:08:10.761054 | orchestrator | 2026-03-01 01:08:10.761061 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-01 01:08:10.761067 | orchestrator | Sunday 01 March 2026 01:06:47 +0000 (0:00:03.905) 0:00:23.247 ********** 2026-03-01 01:08:10.761074 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-01 01:08:10.761078 | orchestrator | 2026-03-01 01:08:10.761082 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-01 01:08:10.761086 | orchestrator | Sunday 01 March 2026 01:06:50 +0000 (0:00:03.245) 0:00:26.493 ********** 2026-03-01 01:08:10.761089 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761093 | orchestrator | 2026-03-01 01:08:10.761097 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-01 01:08:10.761108 | orchestrator | Sunday 01 March 2026 01:06:53 +0000 (0:00:02.976) 0:00:29.469 ********** 2026-03-01 01:08:10.761112 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761115 | orchestrator | 2026-03-01 01:08:10.761119 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-01 01:08:10.761123 | orchestrator | Sunday 01 March 2026 01:06:57 +0000 (0:00:03.262) 0:00:32.732 ********** 2026-03-01 01:08:10.761127 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761130 | orchestrator | 2026-03-01 01:08:10.761134 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-01 
01:08:10.761138 | orchestrator | Sunday 01 March 2026 01:07:01 +0000 (0:00:04.182) 0:00:36.915 ********** 2026-03-01 01:08:10.761144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 
01:08:10.761161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761183 | orchestrator | 2026-03-01 01:08:10.761188 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-01 01:08:10.761192 | orchestrator | Sunday 01 March 2026 01:07:04 +0000 (0:00:02.788) 0:00:39.704 ********** 2026-03-01 01:08:10.761195 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761199 | orchestrator | 2026-03-01 01:08:10.761203 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-01 01:08:10.761207 | orchestrator | Sunday 01 March 2026 01:07:04 +0000 (0:00:00.258) 0:00:39.962 ********** 2026-03-01 01:08:10.761218 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761222 | orchestrator | skipping: 
[testbed-node-1] 2026-03-01 01:08:10.761226 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:08:10.761230 | orchestrator | 2026-03-01 01:08:10.761234 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-01 01:08:10.761237 | orchestrator | Sunday 01 March 2026 01:07:05 +0000 (0:00:00.977) 0:00:40.940 ********** 2026-03-01 01:08:10.761241 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:08:10.761245 | orchestrator | 2026-03-01 01:08:10.761249 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-01 01:08:10.761258 | orchestrator | Sunday 01 March 2026 01:07:07 +0000 (0:00:01.724) 0:00:42.665 ********** 2026-03-01 01:08:10.761264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761297 | orchestrator | 2026-03-01 01:08:10.761300 | orchestrator | TASK 
[magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-01 01:08:10.761304 | orchestrator | Sunday 01 March 2026 01:07:10 +0000 (0:00:03.547) 0:00:46.212 ********** 2026-03-01 01:08:10.761308 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:08:10.761312 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:08:10.761316 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:08:10.761319 | orchestrator | 2026-03-01 01:08:10.761323 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-01 01:08:10.761333 | orchestrator | Sunday 01 March 2026 01:07:11 +0000 (0:00:00.694) 0:00:46.906 ********** 2026-03-01 01:08:10.761341 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:08:10.761345 | orchestrator | 2026-03-01 01:08:10.761349 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-01 01:08:10.761353 | orchestrator | Sunday 01 March 2026 01:07:12 +0000 (0:00:00.737) 0:00:47.643 ********** 2026-03-01 01:08:10.761361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761378 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761395 | orchestrator | 2026-03-01 01:08:10.761398 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-01 01:08:10.761402 | orchestrator | Sunday 01 March 2026 01:07:14 +0000 (0:00:02.724) 0:00:50.367 ********** 2026-03-01 01:08:10.761409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761417 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761456 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:08:10.761460 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:08:10.761464 | orchestrator | 2026-03-01 01:08:10.761468 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] 
****** 2026-03-01 01:08:10.761472 | orchestrator | Sunday 01 March 2026 01:07:15 +0000 (0:00:00.623) 0:00:50.991 ********** 2026-03-01 01:08:10.761477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761486 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761504 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:08:10.761508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761517 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:08:10.761521 | orchestrator | 2026-03-01 01:08:10.761525 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-01 01:08:10.761529 | orchestrator | Sunday 01 March 2026 01:07:16 +0000 (0:00:00.864) 0:00:51.856 ********** 2026-03-01 01:08:10.761533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761576 | orchestrator | 2026-03-01 01:08:10.761583 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-01 01:08:10.761589 | orchestrator | Sunday 01 March 2026 01:07:18 +0000 (0:00:02.005) 0:00:53.861 ********** 2026-03-01 01:08:10.761604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761630 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761659 | orchestrator | 2026-03-01 01:08:10.761665 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-01 01:08:10.761672 | orchestrator | Sunday 01 March 2026 01:07:23 +0000 (0:00:04.770) 0:00:58.632 ********** 2026-03-01 01:08:10.761678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761691 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761719 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:08:10.761730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-01 01:08:10.761737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:08:10.761744 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:08:10.761751 | orchestrator | 2026-03-01 01:08:10.761758 | orchestrator | 
TASK [magnum : Check magnum containers] **************************************** 2026-03-01 01:08:10.761765 | orchestrator | Sunday 01 March 2026 01:07:23 +0000 (0:00:00.603) 0:00:59.235 ********** 2026-03-01 01:08:10.761774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-01 01:08:10.761807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:08:10.761845 | orchestrator | 2026-03-01 01:08:10.761854 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-01 01:08:10.761864 | orchestrator | Sunday 01 March 2026 01:07:25 +0000 (0:00:02.278) 0:01:01.514 ********** 2026-03-01 01:08:10.761871 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:08:10.761878 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:08:10.761888 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:08:10.761895 | orchestrator | 2026-03-01 01:08:10.761901 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-01 
01:08:10.761908 | orchestrator | Sunday 01 March 2026 01:07:26 +0000 (0:00:00.299) 0:01:01.813 ********** 2026-03-01 01:08:10.761915 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761922 | orchestrator | 2026-03-01 01:08:10.761928 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-01 01:08:10.761935 | orchestrator | Sunday 01 March 2026 01:07:28 +0000 (0:00:01.859) 0:01:03.672 ********** 2026-03-01 01:08:10.761941 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761947 | orchestrator | 2026-03-01 01:08:10.761954 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-01 01:08:10.761960 | orchestrator | Sunday 01 March 2026 01:07:30 +0000 (0:00:02.086) 0:01:05.759 ********** 2026-03-01 01:08:10.761967 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.761974 | orchestrator | 2026-03-01 01:08:10.761981 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-01 01:08:10.761987 | orchestrator | Sunday 01 March 2026 01:07:44 +0000 (0:00:14.609) 0:01:20.368 ********** 2026-03-01 01:08:10.761994 | orchestrator | 2026-03-01 01:08:10.762001 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-01 01:08:10.762007 | orchestrator | Sunday 01 March 2026 01:07:44 +0000 (0:00:00.077) 0:01:20.446 ********** 2026-03-01 01:08:10.762103 | orchestrator | 2026-03-01 01:08:10.762118 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-01 01:08:10.762124 | orchestrator | Sunday 01 March 2026 01:07:44 +0000 (0:00:00.060) 0:01:20.506 ********** 2026-03-01 01:08:10.762131 | orchestrator | 2026-03-01 01:08:10.762137 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-01 01:08:10.762144 | orchestrator | Sunday 01 March 2026 01:07:45 +0000 
(0:00:00.066) 0:01:20.573 ********** 2026-03-01 01:08:10.762150 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.762157 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:08:10.762163 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:08:10.762169 | orchestrator | 2026-03-01 01:08:10.762175 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-01 01:08:10.762189 | orchestrator | Sunday 01 March 2026 01:07:59 +0000 (0:00:14.841) 0:01:35.414 ********** 2026-03-01 01:08:10.762196 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:08:10.762204 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:08:10.762211 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:08:10.762216 | orchestrator | 2026-03-01 01:08:10.762222 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:08:10.762229 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-01 01:08:10.762235 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 01:08:10.762242 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 01:08:10.762247 | orchestrator | 2026-03-01 01:08:10.762253 | orchestrator | 2026-03-01 01:08:10.762260 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:08:10.762266 | orchestrator | Sunday 01 March 2026 01:08:10 +0000 (0:00:10.524) 0:01:45.939 ********** 2026-03-01 01:08:10.762272 | orchestrator | =============================================================================== 2026-03-01 01:08:10.762278 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.84s 2026-03-01 01:08:10.762285 | orchestrator | magnum : Running Magnum bootstrap container 
---------------------------- 14.61s 2026-03-01 01:08:10.762291 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.52s 2026-03-01 01:08:10.762304 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.60s 2026-03-01 01:08:10.762311 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.77s 2026-03-01 01:08:10.762317 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.18s 2026-03-01 01:08:10.762324 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.96s 2026-03-01 01:08:10.762331 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.91s 2026-03-01 01:08:10.762337 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.89s 2026-03-01 01:08:10.762345 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.55s 2026-03-01 01:08:10.762351 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.32s 2026-03-01 01:08:10.762358 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.26s 2026-03-01 01:08:10.762365 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.25s 2026-03-01 01:08:10.762372 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.98s 2026-03-01 01:08:10.762378 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.79s 2026-03-01 01:08:10.762385 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.72s 2026-03-01 01:08:10.762397 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.28s 2026-03-01 01:08:10.762403 | orchestrator | magnum : Creating Magnum database user and setting 
permissions ---------- 2.09s 2026-03-01 01:08:10.762410 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.01s 2026-03-01 01:08:10.762417 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.86s 2026-03-01 01:08:10.762424 | orchestrator | 2026-03-01 01:08:10 | INFO  | Task bfdfe714-c668-4e9b-bfa6-3dd353f03cbb is in state SUCCESS 2026-03-01 01:08:10.762432 | orchestrator | 2026-03-01 01:08:10 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:10.762438 | orchestrator | 2026-03-01 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:13.811633 | orchestrator | 2026-03-01 01:08:13 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:13.814177 | orchestrator | 2026-03-01 01:08:13 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:13.817162 | orchestrator | 2026-03-01 01:08:13 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:13.821277 | orchestrator | 2026-03-01 01:08:13 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:13.821902 | orchestrator | 2026-03-01 01:08:13 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:16.868574 | orchestrator | 2026-03-01 01:08:16 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:16.870158 | orchestrator | 2026-03-01 01:08:16 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:16.871071 | orchestrator | 2026-03-01 01:08:16 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:16.874165 | orchestrator | 2026-03-01 01:08:16 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:16.874200 | orchestrator | 2026-03-01 01:08:16 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:19.920937 
| orchestrator | 2026-03-01 01:08:19 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:19.922129 | orchestrator | 2026-03-01 01:08:19 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:19.924320 | orchestrator | 2026-03-01 01:08:19 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:19.925347 | orchestrator | 2026-03-01 01:08:19 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:19.925385 | orchestrator | 2026-03-01 01:08:19 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:22.975995 | orchestrator | 2026-03-01 01:08:22 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:22.976574 | orchestrator | 2026-03-01 01:08:22 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:22.977758 | orchestrator | 2026-03-01 01:08:22 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:22.978929 | orchestrator | 2026-03-01 01:08:22 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:22.978968 | orchestrator | 2026-03-01 01:08:22 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:26.018707 | orchestrator | 2026-03-01 01:08:26 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:26.019614 | orchestrator | 2026-03-01 01:08:26 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:26.020987 | orchestrator | 2026-03-01 01:08:26 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:26.024319 | orchestrator | 2026-03-01 01:08:26 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:26.024397 | orchestrator | 2026-03-01 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:29.057164 | orchestrator | 2026-03-01 
01:08:29 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:29.057840 | orchestrator | 2026-03-01 01:08:29 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:29.059781 | orchestrator | 2026-03-01 01:08:29 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:29.060912 | orchestrator | 2026-03-01 01:08:29 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:29.060948 | orchestrator | 2026-03-01 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:32.092313 | orchestrator | 2026-03-01 01:08:32 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:32.092438 | orchestrator | 2026-03-01 01:08:32 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:32.093343 | orchestrator | 2026-03-01 01:08:32 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:32.093858 | orchestrator | 2026-03-01 01:08:32 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:32.093879 | orchestrator | 2026-03-01 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:35.130277 | orchestrator | 2026-03-01 01:08:35 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:35.131347 | orchestrator | 2026-03-01 01:08:35 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:35.132022 | orchestrator | 2026-03-01 01:08:35 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:35.132839 | orchestrator | 2026-03-01 01:08:35 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:35.132884 | orchestrator | 2026-03-01 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:38.169083 | orchestrator | 2026-03-01 01:08:38 | INFO  | Task 
efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:38.170996 | orchestrator | 2026-03-01 01:08:38 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:38.173890 | orchestrator | 2026-03-01 01:08:38 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:38.175721 | orchestrator | 2026-03-01 01:08:38 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:38.175767 | orchestrator | 2026-03-01 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:41.203684 | orchestrator | 2026-03-01 01:08:41 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:41.204474 | orchestrator | 2026-03-01 01:08:41 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:41.207739 | orchestrator | 2026-03-01 01:08:41 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:41.207792 | orchestrator | 2026-03-01 01:08:41 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:41.207801 | orchestrator | 2026-03-01 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:44.236845 | orchestrator | 2026-03-01 01:08:44 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:44.238597 | orchestrator | 2026-03-01 01:08:44 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:44.238649 | orchestrator | 2026-03-01 01:08:44 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:44.241952 | orchestrator | 2026-03-01 01:08:44 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:44.242062 | orchestrator | 2026-03-01 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:47.274532 | orchestrator | 2026-03-01 01:08:47 | INFO  | Task 
efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:47.274919 | orchestrator | 2026-03-01 01:08:47 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:47.275694 | orchestrator | 2026-03-01 01:08:47 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:47.276367 | orchestrator | 2026-03-01 01:08:47 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:47.276395 | orchestrator | 2026-03-01 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:50.314533 | orchestrator | 2026-03-01 01:08:50 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:50.316185 | orchestrator | 2026-03-01 01:08:50 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:50.318377 | orchestrator | 2026-03-01 01:08:50 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:50.326880 | orchestrator | 2026-03-01 01:08:50 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:50.326930 | orchestrator | 2026-03-01 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:53.354370 | orchestrator | 2026-03-01 01:08:53 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED 2026-03-01 01:08:53.354745 | orchestrator | 2026-03-01 01:08:53 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:08:53.355684 | orchestrator | 2026-03-01 01:08:53 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:08:53.356814 | orchestrator | 2026-03-01 01:08:53 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:08:53.356846 | orchestrator | 2026-03-01 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:08:56.408099 | orchestrator | 2026-03-01 01:08:56 | INFO  | Task 
efe11489-443a-46c5-a84c-1b1d5195d950 is in state STARTED
2026-03-01 01:08:56.410871 | orchestrator | 2026-03-01 01:08:56 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED
2026-03-01 01:08:56.413388 | orchestrator | 2026-03-01 01:08:56 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:08:56.415303 | orchestrator | 2026-03-01 01:08:56 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED
2026-03-01 01:08:56.415384 | orchestrator | 2026-03-01 01:08:56 | INFO  | Wait 1 second(s) until the next check
[... identical status-poll cycles for the same four tasks, repeated every ~3 seconds from 01:08:59 through 01:09:45, omitted ...]
2026-03-01 01:09:48.130214 | orchestrator | 2026-03-01 01:09:48 | INFO  | Task efe11489-443a-46c5-a84c-1b1d5195d950 is in state SUCCESS
2026-03-01 01:09:48.131459 | orchestrator |
2026-03-01 01:09:48.131519 | orchestrator |
2026-03-01 01:09:48.131527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:09:48.131535 | orchestrator |
2026-03-01 01:09:48.131543 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:09:48.131550 | orchestrator | Sunday 01 March 2026 01:06:32 +0000 (0:00:00.284) 0:00:00.284 **********
2026-03-01 01:09:48.131558 | orchestrator | ok: [testbed-manager]
2026-03-01 01:09:48.131609 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:09:48.131616 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:09:48.131624 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:09:48.131631 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:09:48.131638 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:09:48.131645 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:09:48.131660 | orchestrator | 2026-03-01 01:09:48.131672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:09:48.131679 | orchestrator | Sunday 01 March 2026 01:06:33 +0000 (0:00:00.889) 0:00:01.173 ********** 2026-03-01 01:09:48.131704 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131713 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131719 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131726 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131734 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131741 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131763 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-01 01:09:48.131770 | orchestrator | 2026-03-01 01:09:48.131777 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-01 01:09:48.131784 | orchestrator | 2026-03-01 01:09:48.131821 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-01 01:09:48.131828 | orchestrator | Sunday 01 March 2026 01:06:34 +0000 (0:00:00.954) 0:00:02.128 ********** 2026-03-01 01:09:48.131835 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:09:48.131842 | 
orchestrator | 2026-03-01 01:09:48.131849 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-01 01:09:48.131889 | orchestrator | Sunday 01 March 2026 01:06:35 +0000 (0:00:01.607) 0:00:03.735 ********** 2026-03-01 01:09:48.131915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 01:09:48.131923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.131977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.131988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.131992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132000 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132039 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 01:09:48.132045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132100 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132108 | orchestrator | 2026-03-01 01:09:48.132112 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-01 01:09:48.132116 | orchestrator | Sunday 01 March 2026 01:06:39 +0000 (0:00:03.670) 0:00:07.406 ********** 2026-03-01 01:09:48.132120 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:09:48.132124 | orchestrator | 2026-03-01 01:09:48.132128 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-01 01:09:48.132132 | orchestrator | Sunday 01 March 2026 01:06:40 +0000 (0:00:01.270) 0:00:08.677 ********** 2026-03-01 01:09:48.132136 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 01:09:48.132182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132213 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.132222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132249 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}}) 2026-03-01 01:09:48.132270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 01:09:48.132289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.132311 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-01 01:09:48.132687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.132691 | orchestrator | 2026-03-01 01:09:48.132695 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-01 01:09:48.132699 | orchestrator | Sunday 01 March 2026 01:06:46 +0000 (0:00:05.430) 0:00:14.107 ********** 2026-03-01 01:09:48.132703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-01 01:09:48.132714 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-01 01:09:48.132738 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132743 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.132748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.132827 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.132831 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.132835 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.132848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132879 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.132891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132909 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.132913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.132917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.132969 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.132973 | orchestrator | 2026-03-01 01:09:48.133027 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-01 01:09:48.133031 | orchestrator | Sunday 01 March 2026 01:06:48 +0000 (0:00:02.333) 0:00:16.441 ********** 2026-03-01 01:09:48.133035 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-01 01:09:48.133042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133056 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133086 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-01 01:09:48.133098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133102 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133106 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.133110 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.133114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133161 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.133165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-01 01:09:48.133176 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.133188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133201 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.133205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133219 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.133223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-01 01:09:48.133230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-01 01:09:48.133360 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.133364 | orchestrator | 2026-03-01 01:09:48.133367 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-01 01:09:48.133371 | orchestrator | Sunday 01 March 2026 01:06:50 +0000 (0:00:02.240) 0:00:18.681 ********** 2026-03-01 01:09:48.133375 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 01:09:48.133379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.133426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133458 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133481 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-01 01:09:48.133520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133614 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133627 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.133633 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.133652 | orchestrator | 2026-03-01 01:09:48.133656 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-01 01:09:48.133660 | orchestrator | Sunday 01 March 2026 01:06:56 +0000 (0:00:05.532) 0:00:24.214 ********** 2026-03-01 01:09:48.133664 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 01:09:48.133668 | orchestrator | 2026-03-01 01:09:48.133672 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-01 01:09:48.133686 | orchestrator | Sunday 01 March 2026 01:06:57 +0000 (0:00:01.118) 0:00:25.332 ********** 2026-03-01 01:09:48.133690 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.133695 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133699 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133707 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133715 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133730 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133735 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133739 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133742 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1333449, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.616154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133751 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133756 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133760 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133782 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133786 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133797 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1333483, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6229758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133805 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133823 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133827 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133833 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133840 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133847 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133851 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133883 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133888 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133895 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133901 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133905 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133909 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1333440, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.614587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133913 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133927 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133932 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133938 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133944 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133952 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133956 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133970 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133975 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133982 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133988 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133992 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.133996 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134000 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1333476, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6209304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134052 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134062 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134066 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134072 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134076 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134080 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134084 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134101 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134108 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134112 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134118 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1333436, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134122 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134126 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134130 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134144 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134152 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134156 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134162 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134166 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134169 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-01 01:09:48.134173 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134190 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134195 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134199 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 
1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134209 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1333460, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6176288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134212 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134216 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134233 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134238 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134242 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134248 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134253 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134258 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134262 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134272 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134277 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 
1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134288 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134293 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134298 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134302 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.134310 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134318 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1333471, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6207743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134335 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134340 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134345 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134352 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134359 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134364 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134368 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134375 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 
1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134380 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134392 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.134397 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134404 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1333462, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6178837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134409 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134413 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 
01:09:48.134420 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134425 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134429 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134436 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.134441 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134445 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.134452 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134457 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134461 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.134466 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-01 01:09:48.134477 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.134482 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1333445, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6159081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134490 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333482, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6225653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134495 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333430, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.613458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134502 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1333499, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6266456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134507 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1333480, 'dev': 112, 
'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.621996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134511 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1333438, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6141737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1333433, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6137483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134523 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1333469, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6199493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1333465, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6192982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134537 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1333494, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.624983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-01 01:09:48.134542 | orchestrator | 2026-03-01 01:09:48.134546 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-01 01:09:48.134551 | orchestrator | Sunday 01 March 2026 01:07:25 +0000 (0:00:28.525) 0:00:53.858 ********** 2026-03-01 01:09:48.134556 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 01:09:48.134560 | orchestrator | 2026-03-01 01:09:48.134566 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-01 01:09:48.134571 | orchestrator | Sunday 01 March 2026 01:07:26 +0000 (0:00:00.795) 0:00:54.653 ********** 
2026-03-01 01:09:48.134575 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134585 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134590 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134594 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134606 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-01 01:09:48.134615 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134624 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134629 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134633 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134638 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 01:09:48.134642 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134652 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134661 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134665 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-01 01:09:48.134670 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134682 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-01 
01:09:48.134687 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134691 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134695 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-01 01:09:48.134699 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134707 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134716 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134720 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-01 01:09:48.134724 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134727 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134731 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134735 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134738 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134742 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-01 01:09:48.134746 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.134750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134753 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-01 01:09:48.134757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-01 01:09:48.134761 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-01 01:09:48.134765 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-01 01:09:48.134768 | 
orchestrator | 2026-03-01 01:09:48.134772 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-01 01:09:48.134776 | orchestrator | Sunday 01 March 2026 01:07:28 +0000 (0:00:01.759) 0:00:56.413 ********** 2026-03-01 01:09:48.134780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134784 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.134787 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134791 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.134795 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134799 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.134802 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134806 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.134810 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134814 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.134818 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-01 01:09:48.134821 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.134825 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-01 01:09:48.134829 | orchestrator | 2026-03-01 01:09:48.134833 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-01 01:09:48.134836 | orchestrator | Sunday 01 March 2026 01:07:42 +0000 (0:00:14.482) 0:01:10.895 ********** 2026-03-01 01:09:48.134840 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134846 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.134853 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134939 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.134944 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134948 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.134951 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134955 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.134959 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134963 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.134967 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-01 01:09:48.134970 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.134974 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-01 01:09:48.134978 | orchestrator | 2026-03-01 01:09:48.134982 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-01 01:09:48.134986 | orchestrator | Sunday 01 March 2026 01:07:45 +0000 (0:00:02.859) 0:01:13.754 ********** 2026-03-01 01:09:48.134990 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.134994 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.134998 | 
orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.135002 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.135005 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135009 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135013 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-01 01:09:48.135017 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.135021 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135028 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.135032 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-01 01:09:48.135036 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135040 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135043 | orchestrator | 2026-03-01 01:09:48.135047 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-01 01:09:48.135051 | orchestrator | Sunday 01 March 2026 01:07:48 +0000 (0:00:02.573) 0:01:16.328 ********** 2026-03-01 01:09:48.135055 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 01:09:48.135058 | orchestrator | 2026-03-01 01:09:48.135062 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-01 01:09:48.135066 | orchestrator | Sunday 01 March 2026 01:07:49 +0000 (0:00:00.844) 0:01:17.173 ********** 2026-03-01 01:09:48.135070 | orchestrator | skipping: [testbed-manager] 
2026-03-01 01:09:48.135074 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.135078 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135081 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135085 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135089 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135093 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135096 | orchestrator | 2026-03-01 01:09:48.135100 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-01 01:09:48.135107 | orchestrator | Sunday 01 March 2026 01:07:50 +0000 (0:00:00.832) 0:01:18.005 ********** 2026-03-01 01:09:48.135111 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.135115 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135119 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135122 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135126 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135130 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135134 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135137 | orchestrator | 2026-03-01 01:09:48.135141 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-01 01:09:48.135145 | orchestrator | Sunday 01 March 2026 01:07:52 +0000 (0:00:02.470) 0:01:20.476 ********** 2026-03-01 01:09:48.135149 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135153 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.135156 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135160 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.135164 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135168 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135171 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135175 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135182 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135186 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135189 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135193 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135197 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-01 01:09:48.135201 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135204 | orchestrator | 2026-03-01 01:09:48.135208 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-01 01:09:48.135212 | orchestrator | Sunday 01 March 2026 01:07:53 +0000 (0:00:01.461) 0:01:21.937 ********** 2026-03-01 01:09:48.135216 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135220 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135223 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135227 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.135231 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135235 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135238 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135295 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135299 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135303 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135307 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-01 01:09:48.135311 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135314 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-01 01:09:48.135318 | orchestrator | 2026-03-01 01:09:48.135322 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-01 01:09:48.135332 | orchestrator | Sunday 01 March 2026 01:07:55 +0000 (0:00:01.852) 0:01:23.790 ********** 2026-03-01 01:09:48.135335 | orchestrator | [WARNING]: Skipped 2026-03-01 01:09:48.135339 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-01 01:09:48.135346 | orchestrator | due to this access issue: 2026-03-01 01:09:48.135349 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-01 01:09:48.135354 | orchestrator | not a directory 2026-03-01 01:09:48.135357 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-01 01:09:48.135361 | orchestrator | 2026-03-01 01:09:48.135365 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-01 01:09:48.135369 | orchestrator | Sunday 01 March 2026 01:07:56 +0000 (0:00:01.040) 0:01:24.830 ********** 2026-03-01 01:09:48.135373 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.135376 | orchestrator | skipping: [testbed-node-0] 2026-03-01 
01:09:48.135380 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135384 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135387 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135391 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135395 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135399 | orchestrator | 2026-03-01 01:09:48.135402 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-01 01:09:48.135406 | orchestrator | Sunday 01 March 2026 01:07:57 +0000 (0:00:00.761) 0:01:25.591 ********** 2026-03-01 01:09:48.135410 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.135414 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:09:48.135417 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:09:48.135421 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:09:48.135425 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:09:48.135428 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:09:48.135432 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:09:48.135436 | orchestrator | 2026-03-01 01:09:48.135440 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-01 01:09:48.135443 | orchestrator | Sunday 01 March 2026 01:07:58 +0000 (0:00:00.729) 0:01:26.320 ********** 2026-03-01 01:09:48.135448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135452 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135476 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-01 01:09:48.135481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-01 01:09:48.135489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-01 01:09:48.135508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135526 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-03-01 01:09:48.135547 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-01 01:09:48.135570 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 
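The `(item=…)` payloads in the loop output above are kolla-style container definitions: each service key maps to a dict with `container_name`, `group`, `enabled`, `image`, `volumes`, and `dimensions`. A minimal sketch of filtering such definitions (the dict shape is taken from the log; the helper itself is hypothetical, not part of kolla-ansible):

```python
# Illustrative sketch only: the dict layout mirrors the loop items logged
# above; enabled_containers() is a hypothetical helper, not kolla-ansible code.
def enabled_containers(services):
    """Return container_name -> image for every enabled service entry."""
    return {
        v["container_name"]: v["image"]
        for v in services.values()
        if v.get("enabled")
    }

services = {
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.2",
        "volumes": ["kolla_logs:/var/log/kolla/", "/:/rootfs:ro"],
        "dimensions": {},
    },
    # Hypothetical disabled entry, to show the filter in action.
    "example-disabled": {
        "container_name": "example",
        "enabled": False,
        "image": "registry.osism.tech/kolla/example:2024.2",
    },
}

print(enabled_containers(services))
```

Running this prints only the cadvisor mapping, since the second entry has `enabled: False`.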
2026-03-01 01:09:48.135578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135592 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-01 01:09:48.135596 | orchestrator | 2026-03-01 01:09:48.135600 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-01 01:09:48.135603 | orchestrator | Sunday 01 March 2026 01:08:02 +0000 (0:00:04.524) 0:01:30.845 ********** 2026-03-01 01:09:48.135607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-01 01:09:48.135611 | orchestrator | skipping: [testbed-manager] 2026-03-01 01:09:48.135615 | orchestrator | 2026-03-01 01:09:48.135619 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135622 | orchestrator | Sunday 01 March 2026 01:08:03 +0000 (0:00:01.023) 0:01:31.869 ********** 2026-03-01 01:09:48.135626 | orchestrator | 2026-03-01 01:09:48.135630 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135634 | orchestrator | Sunday 01 March 2026 01:08:03 +0000 (0:00:00.065) 0:01:31.934 ********** 2026-03-01 01:09:48.135637 | orchestrator | 2026-03-01 01:09:48.135641 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135645 | orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.082) 0:01:32.017 ********** 2026-03-01 01:09:48.135649 | orchestrator | 2026-03-01 01:09:48.135652 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135656 | orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.078) 0:01:32.095 ********** 2026-03-01 01:09:48.135660 | orchestrator | 2026-03-01 01:09:48.135664 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135667 | 
orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.169) 0:01:32.265 ********** 2026-03-01 01:09:48.135675 | orchestrator | 2026-03-01 01:09:48.135679 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135682 | orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.058) 0:01:32.323 ********** 2026-03-01 01:09:48.135686 | orchestrator | 2026-03-01 01:09:48.135690 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-01 01:09:48.135694 | orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.058) 0:01:32.382 ********** 2026-03-01 01:09:48.135697 | orchestrator | 2026-03-01 01:09:48.135701 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-01 01:09:48.135705 | orchestrator | Sunday 01 March 2026 01:08:04 +0000 (0:00:00.080) 0:01:32.463 ********** 2026-03-01 01:09:48.135709 | orchestrator | changed: [testbed-manager] 2026-03-01 01:09:48.135712 | orchestrator | 2026-03-01 01:09:48.135716 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-01 01:09:48.135722 | orchestrator | Sunday 01 March 2026 01:08:27 +0000 (0:00:22.714) 0:01:55.178 ********** 2026-03-01 01:09:48.135726 | orchestrator | changed: [testbed-manager] 2026-03-01 01:09:48.135730 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:09:48.135733 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:09:48.135737 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:09:48.135741 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135744 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135748 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135752 | orchestrator | 2026-03-01 01:09:48.135756 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-01 01:09:48.135759 | 
orchestrator | Sunday 01 March 2026 01:08:42 +0000 (0:00:14.932) 0:02:10.110 ********** 2026-03-01 01:09:48.135763 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135767 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135771 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135774 | orchestrator | 2026-03-01 01:09:48.135778 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-01 01:09:48.135782 | orchestrator | Sunday 01 March 2026 01:08:49 +0000 (0:00:06.879) 0:02:16.989 ********** 2026-03-01 01:09:48.135786 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135789 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135793 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135797 | orchestrator | 2026-03-01 01:09:48.135801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-01 01:09:48.135804 | orchestrator | Sunday 01 March 2026 01:08:59 +0000 (0:00:10.468) 0:02:27.457 ********** 2026-03-01 01:09:48.135808 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:09:48.135812 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135816 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:09:48.135819 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135823 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:09:48.135827 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135831 | orchestrator | changed: [testbed-manager] 2026-03-01 01:09:48.135834 | orchestrator | 2026-03-01 01:09:48.135838 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-01 01:09:48.135842 | orchestrator | Sunday 01 March 2026 01:09:13 +0000 (0:00:13.746) 0:02:41.204 ********** 2026-03-01 01:09:48.135846 | orchestrator | changed: [testbed-manager] 2026-03-01 01:09:48.135849 | orchestrator | 2026-03-01 
01:09:48.135853 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-01 01:09:48.135871 | orchestrator | Sunday 01 March 2026 01:09:20 +0000 (0:00:06.854) 0:02:48.058 ********** 2026-03-01 01:09:48.135878 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:09:48.135884 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:09:48.135889 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:09:48.135895 | orchestrator | 2026-03-01 01:09:48.135901 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-01 01:09:48.135910 | orchestrator | Sunday 01 March 2026 01:09:32 +0000 (0:00:12.138) 0:03:00.197 ********** 2026-03-01 01:09:48.135920 | orchestrator | changed: [testbed-manager] 2026-03-01 01:09:48.135926 | orchestrator | 2026-03-01 01:09:48.135933 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-01 01:09:48.135939 | orchestrator | Sunday 01 March 2026 01:09:37 +0000 (0:00:05.153) 0:03:05.350 ********** 2026-03-01 01:09:48.135945 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:09:48.135951 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:09:48.135956 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:09:48.135962 | orchestrator | 2026-03-01 01:09:48.135968 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:09:48.135975 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-01 01:09:48.135982 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-01 01:09:48.135989 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-01 01:09:48.135995 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 
ignored=0 2026-03-01 01:09:48.136002 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-01 01:09:48.136021 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-01 01:09:48.136028 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-01 01:09:48.136034 | orchestrator | 2026-03-01 01:09:48.136040 | orchestrator | 2026-03-01 01:09:48.136046 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:09:48.136053 | orchestrator | Sunday 01 March 2026 01:09:47 +0000 (0:00:10.201) 0:03:15.552 ********** 2026-03-01 01:09:48.136060 | orchestrator | =============================================================================== 2026-03-01 01:09:48.136064 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.53s 2026-03-01 01:09:48.136069 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.71s 2026-03-01 01:09:48.136073 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.93s 2026-03-01 01:09:48.136078 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.48s 2026-03-01 01:09:48.136082 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.75s 2026-03-01 01:09:48.136090 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.14s 2026-03-01 01:09:48.136095 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.47s 2026-03-01 01:09:48.136099 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.20s 2026-03-01 01:09:48.136104 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.88s 2026-03-01 
01:09:48.136108 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.85s 2026-03-01 01:09:48.136113 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.53s 2026-03-01 01:09:48.136117 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.43s 2026-03-01 01:09:48.136121 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.15s 2026-03-01 01:09:48.136126 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.52s 2026-03-01 01:09:48.136130 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.67s 2026-03-01 01:09:48.136135 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.86s 2026-03-01 01:09:48.136142 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.57s 2026-03-01 01:09:48.136146 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.47s 2026-03-01 01:09:48.136151 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.33s 2026-03-01 01:09:48.136155 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.24s 2026-03-01 01:09:48.136160 | orchestrator | 2026-03-01 01:09:48 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:09:48.136165 | orchestrator | 2026-03-01 01:09:48 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:09:48.137718 | orchestrator | 2026-03-01 01:09:48 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:09:48.138079 | orchestrator | 2026-03-01 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:09:51.193270 | orchestrator | 2026-03-01 01:09:51 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b 
is in state STARTED 2026-03-01 01:09:51.194846 | orchestrator | 2026-03-01 01:09:51 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:09:51.196406 | orchestrator | 2026-03-01 01:09:51 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:09:51.197839 | orchestrator | 2026-03-01 01:09:51 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:09:51.197998 | orchestrator | 2026-03-01 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:36.965506 | orchestrator | 2026-03-01 01:10:36 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:36.968062 | orchestrator | 2026-03-01 01:10:36 | INFO  | Task
cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:36.970186 | orchestrator | 2026-03-01 01:10:36 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:36.972198 | orchestrator | 2026-03-01 01:10:36 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state STARTED 2026-03-01 01:10:36.972239 | orchestrator | 2026-03-01 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:40.033385 | orchestrator | 2026-03-01 01:10:40 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:40.033434 | orchestrator | 2026-03-01 01:10:40 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:40.034046 | orchestrator | 2026-03-01 01:10:40 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:40.039253 | orchestrator | 2026-03-01 01:10:40.039308 | orchestrator | 2026-03-01 01:10:40.039317 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:10:40.039324 | orchestrator | 2026-03-01 01:10:40.039330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:10:40.039430 | orchestrator | Sunday 01 March 2026 01:07:55 +0000 (0:00:00.265) 0:00:00.265 ********** 2026-03-01 01:10:40.039439 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:10:40.039446 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:10:40.039452 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:10:40.039459 | orchestrator | 2026-03-01 01:10:40.039465 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:10:40.039472 | orchestrator | Sunday 01 March 2026 01:07:55 +0000 (0:00:00.364) 0:00:00.630 ********** 2026-03-01 01:10:40.039478 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-01 01:10:40.039484 | orchestrator | ok: [testbed-node-1] => 
(item=enable_glance_True) 2026-03-01 01:10:40.039491 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-01 01:10:40.039497 | orchestrator | 2026-03-01 01:10:40.039504 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-01 01:10:40.039510 | orchestrator | 2026-03-01 01:10:40.039517 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-01 01:10:40.039523 | orchestrator | Sunday 01 March 2026 01:07:55 +0000 (0:00:00.327) 0:00:00.957 ********** 2026-03-01 01:10:40.039530 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:10:40.039537 | orchestrator | 2026-03-01 01:10:40.039544 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-01 01:10:40.039550 | orchestrator | Sunday 01 March 2026 01:07:56 +0000 (0:00:00.465) 0:00:01.423 ********** 2026-03-01 01:10:40.039557 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-01 01:10:40.039564 | orchestrator | 2026-03-01 01:10:40.039570 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-01 01:10:40.039577 | orchestrator | Sunday 01 March 2026 01:07:59 +0000 (0:00:03.173) 0:00:04.596 ********** 2026-03-01 01:10:40.039584 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-01 01:10:40.039591 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-01 01:10:40.039597 | orchestrator | 2026-03-01 01:10:40.039604 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-01 01:10:40.039610 | orchestrator | Sunday 01 March 2026 01:08:05 +0000 (0:00:06.082) 0:00:10.679 ********** 2026-03-01 01:10:40.039616 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2026-03-01 01:10:40.039623 | orchestrator | 2026-03-01 01:10:40.039629 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-01 01:10:40.039635 | orchestrator | Sunday 01 March 2026 01:08:08 +0000 (0:00:03.149) 0:00:13.828 ********** 2026-03-01 01:10:40.039642 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-01 01:10:40.039649 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 01:10:40.039656 | orchestrator | 2026-03-01 01:10:40.039674 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-01 01:10:40.039681 | orchestrator | Sunday 01 March 2026 01:08:12 +0000 (0:00:03.543) 0:00:17.371 ********** 2026-03-01 01:10:40.039688 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:10:40.039694 | orchestrator | 2026-03-01 01:10:40.039700 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-01 01:10:40.039706 | orchestrator | Sunday 01 March 2026 01:08:15 +0000 (0:00:03.388) 0:00:20.760 ********** 2026-03-01 01:10:40.039712 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-01 01:10:40.039718 | orchestrator | 2026-03-01 01:10:40.039724 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-01 01:10:40.039730 | orchestrator | Sunday 01 March 2026 01:08:19 +0000 (0:00:03.557) 0:00:24.318 ********** 2026-03-01 01:10:40.039753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.039777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.039794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.039822 | orchestrator | 2026-03-01 01:10:40.039829 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-01 01:10:40.039833 | orchestrator | Sunday 01 March 2026 01:08:23 +0000 (0:00:04.016) 0:00:28.334 ********** 2026-03-01 01:10:40.039837 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:10:40.039841 | orchestrator | 2026-03-01 01:10:40.039845 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-01 01:10:40.039853 | orchestrator | Sunday 01 March 2026 01:08:23 +0000 (0:00:00.692) 0:00:29.027 ********** 2026-03-01 01:10:40.039857 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:10:40.039861 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.039865 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:10:40.039868 | orchestrator | 2026-03-01 01:10:40.039872 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-01 01:10:40.039876 | orchestrator | Sunday 01 March 2026 01:08:27 +0000 (0:00:03.816) 0:00:32.843 ********** 2026-03-01 01:10:40.039880 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039884 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039887 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039891 | orchestrator | 2026-03-01 01:10:40.039895 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-01 01:10:40.039899 | orchestrator | Sunday 01 March 2026 01:08:30 +0000 (0:00:02.931) 0:00:35.774 ********** 2026-03-01 01:10:40.039902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:10:40.039917 | orchestrator | 2026-03-01 01:10:40.039921 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-01 01:10:40.039925 | orchestrator | Sunday 01 March 2026 01:08:32 +0000 (0:00:01.653) 0:00:37.428 ********** 2026-03-01 01:10:40.039929 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:10:40.039932 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:10:40.039936 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:10:40.039941 | orchestrator | 2026-03-01 01:10:40.039948 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-01 01:10:40.039954 | orchestrator | Sunday 01 March 2026 01:08:33 +0000 (0:00:00.823) 0:00:38.251 ********** 2026-03-01 01:10:40.039961 | orchestrator | skipping: [testbed-node-0] 2026-03-01 
01:10:40.039967 | orchestrator | 2026-03-01 01:10:40.039973 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-01 01:10:40.039979 | orchestrator | Sunday 01 March 2026 01:08:33 +0000 (0:00:00.157) 0:00:38.409 ********** 2026-03-01 01:10:40.039984 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.039990 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.039996 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040003 | orchestrator | 2026-03-01 01:10:40.040009 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-01 01:10:40.040016 | orchestrator | Sunday 01 March 2026 01:08:33 +0000 (0:00:00.256) 0:00:38.666 ********** 2026-03-01 01:10:40.040022 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:10:40.040029 | orchestrator | 2026-03-01 01:10:40.040035 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-01 01:10:40.040042 | orchestrator | Sunday 01 March 2026 01:08:34 +0000 (0:00:00.607) 0:00:39.273 ********** 2026-03-01 01:10:40.040050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040078 | orchestrator | 2026-03-01 01:10:40.040083 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-01 01:10:40.040087 | orchestrator | Sunday 01 March 2026 01:08:38 +0000 (0:00:04.427) 0:00:43.700 ********** 2026-03-01 01:10:40.040096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040104 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040114 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040133 | 
orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040137 | orchestrator | 2026-03-01 01:10:40.040142 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-01 01:10:40.040147 | orchestrator | Sunday 01 March 2026 01:08:41 +0000 (0:00:03.100) 0:00:46.801 ********** 2026-03-01 01:10:40.040152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040157 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040172 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040186 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-01 01:10:40.040194 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040200 | orchestrator | 2026-03-01 01:10:40.040207 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-01 01:10:40.040214 | orchestrator | Sunday 01 March 2026 01:08:46 +0000 (0:00:04.647) 0:00:51.448 ********** 
2026-03-01 01:10:40.040220 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040227 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040233 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040240 | orchestrator | 2026-03-01 01:10:40.040246 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-01 01:10:40.040250 | orchestrator | Sunday 01 March 2026 01:08:50 +0000 (0:00:03.818) 0:00:55.266 ********** 2026-03-01 01:10:40.040256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040273 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040278 | orchestrator | 2026-03-01 01:10:40.040281 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-01 01:10:40.040285 | orchestrator | Sunday 01 March 2026 01:08:54 +0000 (0:00:04.739) 0:01:00.006 ********** 2026-03-01 01:10:40.040289 | orchestrator | changed: 
[testbed-node-0] 2026-03-01 01:10:40.040293 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:10:40.040297 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:10:40.040300 | orchestrator | 2026-03-01 01:10:40.040304 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-01 01:10:40.040311 | orchestrator | Sunday 01 March 2026 01:09:00 +0000 (0:00:05.971) 0:01:05.978 ********** 2026-03-01 01:10:40.040314 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040318 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040322 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040325 | orchestrator | 2026-03-01 01:10:40.040329 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-01 01:10:40.040333 | orchestrator | Sunday 01 March 2026 01:09:06 +0000 (0:00:05.938) 0:01:11.917 ********** 2026-03-01 01:10:40.040337 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040341 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040345 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040348 | orchestrator | 2026-03-01 01:10:40.040352 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-01 01:10:40.040356 | orchestrator | Sunday 01 March 2026 01:09:10 +0000 (0:00:03.326) 0:01:15.244 ********** 2026-03-01 01:10:40.040360 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040365 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040369 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040373 | orchestrator | 2026-03-01 01:10:40.040377 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-01 01:10:40.040381 | orchestrator | Sunday 01 March 2026 01:09:13 +0000 (0:00:03.664) 0:01:18.908 ********** 2026-03-01 01:10:40.040385 | orchestrator | skipping: 
[testbed-node-2] 2026-03-01 01:10:40.040388 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040392 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040396 | orchestrator | 2026-03-01 01:10:40.040400 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-01 01:10:40.040403 | orchestrator | Sunday 01 March 2026 01:09:19 +0000 (0:00:05.634) 0:01:24.543 ********** 2026-03-01 01:10:40.040407 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040411 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040415 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040418 | orchestrator | 2026-03-01 01:10:40.040422 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-01 01:10:40.040426 | orchestrator | Sunday 01 March 2026 01:09:19 +0000 (0:00:00.334) 0:01:24.877 ********** 2026-03-01 01:10:40.040430 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-01 01:10:40.040434 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040437 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-01 01:10:40.040441 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:10:40.040445 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-01 01:10:40.040449 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040452 | orchestrator | 2026-03-01 01:10:40.040456 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-01 01:10:40.040460 | orchestrator | Sunday 01 March 2026 01:09:25 +0000 (0:00:05.381) 0:01:30.259 ********** 2026-03-01 01:10:40.040464 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:10:40.040468 | orchestrator | changed: [testbed-node-2] 
2026-03-01 01:10:40.040471 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040475 | orchestrator | 2026-03-01 01:10:40.040479 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-01 01:10:40.040483 | orchestrator | Sunday 01 March 2026 01:09:29 +0000 (0:00:04.282) 0:01:34.542 ********** 2026-03-01 01:10:40.040489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-01 01:10:40.040510 | orchestrator | 2026-03-01 01:10:40.040514 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-01 01:10:40.040518 | orchestrator | Sunday 01 March 2026 01:09:33 +0000 (0:00:03.621) 0:01:38.164 ********** 2026-03-01 01:10:40.040522 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:10:40.040525 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:10:40.040529 | orchestrator | skipping: 
[testbed-node-2] 2026-03-01 01:10:40.040533 | orchestrator | 2026-03-01 01:10:40.040538 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-01 01:10:40.040542 | orchestrator | Sunday 01 March 2026 01:09:33 +0000 (0:00:00.271) 0:01:38.435 ********** 2026-03-01 01:10:40.040546 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040550 | orchestrator | 2026-03-01 01:10:40.040554 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-01 01:10:40.040557 | orchestrator | Sunday 01 March 2026 01:09:35 +0000 (0:00:02.130) 0:01:40.566 ********** 2026-03-01 01:10:40.040561 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040565 | orchestrator | 2026-03-01 01:10:40.040569 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-01 01:10:40.040572 | orchestrator | Sunday 01 March 2026 01:09:38 +0000 (0:00:02.732) 0:01:43.299 ********** 2026-03-01 01:10:40.040576 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040580 | orchestrator | 2026-03-01 01:10:40.040584 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-01 01:10:40.040588 | orchestrator | Sunday 01 March 2026 01:09:40 +0000 (0:00:02.354) 0:01:45.653 ********** 2026-03-01 01:10:40.040591 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040595 | orchestrator | 2026-03-01 01:10:40.040599 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-01 01:10:40.040603 | orchestrator | Sunday 01 March 2026 01:10:06 +0000 (0:00:26.471) 0:02:12.125 ********** 2026-03-01 01:10:40.040606 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040610 | orchestrator | 2026-03-01 01:10:40.040614 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-01 01:10:40.040618 | 
orchestrator | Sunday 01 March 2026 01:10:09 +0000 (0:00:02.353) 0:02:14.478 ********** 2026-03-01 01:10:40.040621 | orchestrator | 2026-03-01 01:10:40.040627 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-01 01:10:40.040631 | orchestrator | Sunday 01 March 2026 01:10:09 +0000 (0:00:00.062) 0:02:14.541 ********** 2026-03-01 01:10:40.040635 | orchestrator | 2026-03-01 01:10:40.040638 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-01 01:10:40.040642 | orchestrator | Sunday 01 March 2026 01:10:09 +0000 (0:00:00.065) 0:02:14.607 ********** 2026-03-01 01:10:40.040646 | orchestrator | 2026-03-01 01:10:40.040650 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-01 01:10:40.040654 | orchestrator | Sunday 01 March 2026 01:10:09 +0000 (0:00:00.065) 0:02:14.672 ********** 2026-03-01 01:10:40.040657 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:10:40.040661 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:10:40.040665 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:10:40.040669 | orchestrator | 2026-03-01 01:10:40.040672 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:10:40.040677 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-01 01:10:40.040684 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-01 01:10:40.040688 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-01 01:10:40.040692 | orchestrator | 2026-03-01 01:10:40.040696 | orchestrator | 2026-03-01 01:10:40.040699 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:10:40.040703 | orchestrator | Sunday 01 
March 2026 01:10:39 +0000 (0:00:30.056) 0:02:44.729 ********** 2026-03-01 01:10:40.040707 | orchestrator | =============================================================================== 2026-03-01 01:10:40.040711 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.06s 2026-03-01 01:10:40.040715 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.47s 2026-03-01 01:10:40.040718 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.08s 2026-03-01 01:10:40.040722 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.97s 2026-03-01 01:10:40.040726 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.94s 2026-03-01 01:10:40.040730 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.63s 2026-03-01 01:10:40.040733 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.38s 2026-03-01 01:10:40.040737 | orchestrator | glance : Copying over config.json files for services -------------------- 4.74s 2026-03-01 01:10:40.040741 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.65s 2026-03-01 01:10:40.040745 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.43s 2026-03-01 01:10:40.040748 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.28s 2026-03-01 01:10:40.040752 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.02s 2026-03-01 01:10:40.040756 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.82s 2026-03-01 01:10:40.040775 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.82s 2026-03-01 01:10:40.040782 | orchestrator | glance : Copying over 
glance-image-import.conf -------------------------- 3.66s 2026-03-01 01:10:40.040789 | orchestrator | glance : Check glance containers ---------------------------------------- 3.62s 2026-03-01 01:10:40.040794 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.56s 2026-03-01 01:10:40.040798 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.54s 2026-03-01 01:10:40.040802 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.39s 2026-03-01 01:10:40.040806 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.33s 2026-03-01 01:10:40.040812 | orchestrator | 2026-03-01 01:10:40 | INFO  | Task 40670421-e373-4fe1-9132-825b0fe330c1 is in state SUCCESS 2026-03-01 01:10:40.040816 | orchestrator | 2026-03-01 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:43.067948 | orchestrator | 2026-03-01 01:10:43 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:43.069916 | orchestrator | 2026-03-01 01:10:43 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:43.072528 | orchestrator | 2026-03-01 01:10:43 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:10:43.074436 | orchestrator | 2026-03-01 01:10:43 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:43.074502 | orchestrator | 2026-03-01 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:46.119883 | orchestrator | 2026-03-01 01:10:46 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:46.122150 | orchestrator | 2026-03-01 01:10:46 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:46.123534 | orchestrator | 2026-03-01 01:10:46 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 
2026-03-01 01:10:46.125154 | orchestrator | 2026-03-01 01:10:46 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:46.125216 | orchestrator | 2026-03-01 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:49.179099 | orchestrator | 2026-03-01 01:10:49 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:49.183177 | orchestrator | 2026-03-01 01:10:49 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:49.183375 | orchestrator | 2026-03-01 01:10:49 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:10:49.184929 | orchestrator | 2026-03-01 01:10:49 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:49.185291 | orchestrator | 2026-03-01 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:52.249594 | orchestrator | 2026-03-01 01:10:52 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:52.251873 | orchestrator | 2026-03-01 01:10:52 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:52.252382 | orchestrator | 2026-03-01 01:10:52 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:10:52.254138 | orchestrator | 2026-03-01 01:10:52 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:52.254401 | orchestrator | 2026-03-01 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:55.298742 | orchestrator | 2026-03-01 01:10:55 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:55.299199 | orchestrator | 2026-03-01 01:10:55 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:55.300416 | orchestrator | 2026-03-01 01:10:55 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:10:55.301198 | 
orchestrator | 2026-03-01 01:10:55 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:55.301474 | orchestrator | 2026-03-01 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:10:58.340791 | orchestrator | 2026-03-01 01:10:58 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:10:58.341440 | orchestrator | 2026-03-01 01:10:58 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:10:58.342857 | orchestrator | 2026-03-01 01:10:58 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:10:58.345092 | orchestrator | 2026-03-01 01:10:58 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:10:58.345318 | orchestrator | 2026-03-01 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:01.395643 | orchestrator | 2026-03-01 01:11:01 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:11:01.398568 | orchestrator | 2026-03-01 01:11:01 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:01.401024 | orchestrator | 2026-03-01 01:11:01 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:01.402776 | orchestrator | 2026-03-01 01:11:01 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:01.402983 | orchestrator | 2026-03-01 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:04.448079 | orchestrator | 2026-03-01 01:11:04 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:11:04.450400 | orchestrator | 2026-03-01 01:11:04 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:04.454174 | orchestrator | 2026-03-01 01:11:04 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:04.457571 | orchestrator | 2026-03-01 
01:11:04 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:04.457626 | orchestrator | 2026-03-01 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:07.509503 | orchestrator | 2026-03-01 01:11:07 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:11:07.510469 | orchestrator | 2026-03-01 01:11:07 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:07.511346 | orchestrator | 2026-03-01 01:11:07 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:07.512368 | orchestrator | 2026-03-01 01:11:07 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:07.512395 | orchestrator | 2026-03-01 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:10.554011 | orchestrator | 2026-03-01 01:11:10 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:11:10.557193 | orchestrator | 2026-03-01 01:11:10 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:10.559073 | orchestrator | 2026-03-01 01:11:10 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:10.559478 | orchestrator | 2026-03-01 01:11:10 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:10.559770 | orchestrator | 2026-03-01 01:11:10 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:13.611301 | orchestrator | 2026-03-01 01:11:13 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state STARTED 2026-03-01 01:11:13.613245 | orchestrator | 2026-03-01 01:11:13 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:13.616200 | orchestrator | 2026-03-01 01:11:13 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:13.618875 | orchestrator | 2026-03-01 01:11:13 | INFO  | Task 
66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:13.618979 | orchestrator | 2026-03-01 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:16.658920 | orchestrator | 2026-03-01 01:11:16 | INFO  | Task d4953eb7-859f-4da5-910c-889481dfba9b is in state SUCCESS 2026-03-01 01:11:16.659681 | orchestrator | 2026-03-01 01:11:16.659721 | orchestrator | 2026-03-01 01:11:16.659726 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:11:16.659730 | orchestrator | 2026-03-01 01:11:16.659734 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:11:16.659737 | orchestrator | Sunday 01 March 2026 01:08:10 +0000 (0:00:00.228) 0:00:00.228 ********** 2026-03-01 01:11:16.659741 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:11:16.659744 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:11:16.659748 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:11:16.659751 | orchestrator | 2026-03-01 01:11:16.659754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:11:16.659757 | orchestrator | Sunday 01 March 2026 01:08:10 +0000 (0:00:00.328) 0:00:00.556 ********** 2026-03-01 01:11:16.659771 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-01 01:11:16.659774 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-01 01:11:16.659778 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-01 01:11:16.659781 | orchestrator | 2026-03-01 01:11:16.659786 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-01 01:11:16.659791 | orchestrator | 2026-03-01 01:11:16.659797 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-01 01:11:16.659805 | orchestrator | Sunday 01 March 2026 01:08:10 +0000 (0:00:00.358) 
0:00:00.915 ********** 2026-03-01 01:11:16.659810 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:11:16.659816 | orchestrator | 2026-03-01 01:11:16.659821 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-01 01:11:16.659826 | orchestrator | Sunday 01 March 2026 01:08:11 +0000 (0:00:00.489) 0:00:01.404 ********** 2026-03-01 01:11:16.659832 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-01 01:11:16.659837 | orchestrator | 2026-03-01 01:11:16.659842 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-01 01:11:16.659847 | orchestrator | Sunday 01 March 2026 01:08:14 +0000 (0:00:03.246) 0:00:04.650 ********** 2026-03-01 01:11:16.659860 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-01 01:11:16.659865 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-01 01:11:16.659911 | orchestrator | 2026-03-01 01:11:16.659918 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-01 01:11:16.659923 | orchestrator | Sunday 01 March 2026 01:08:20 +0000 (0:00:05.666) 0:00:10.317 ********** 2026-03-01 01:11:16.659928 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-01 01:11:16.659933 | orchestrator | 2026-03-01 01:11:16.659939 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-01 01:11:16.659944 | orchestrator | Sunday 01 March 2026 01:08:23 +0000 (0:00:03.127) 0:00:13.445 ********** 2026-03-01 01:11:16.659949 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-01 01:11:16.659960 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-03-01 01:11:16.659972 | orchestrator | 2026-03-01 01:11:16.660042 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-01 01:11:16.660048 | orchestrator | Sunday 01 March 2026 01:08:27 +0000 (0:00:04.521) 0:00:17.966 ********** 2026-03-01 01:11:16.660167 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:11:16.660174 | orchestrator | 2026-03-01 01:11:16.660177 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-01 01:11:16.660181 | orchestrator | Sunday 01 March 2026 01:08:31 +0000 (0:00:03.808) 0:00:21.774 ********** 2026-03-01 01:11:16.660184 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-01 01:11:16.660187 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-01 01:11:16.660191 | orchestrator | 2026-03-01 01:11:16.660194 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-01 01:11:16.660197 | orchestrator | Sunday 01 March 2026 01:08:39 +0000 (0:00:07.898) 0:00:29.673 ********** 2026-03-01 01:11:16.660203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.660220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.660224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.660231 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660273 | orchestrator | 2026-03-01 01:11:16.660276 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-01 01:11:16.660280 | orchestrator | Sunday 01 March 2026 01:08:41 +0000 (0:00:02.213) 0:00:31.887 ********** 2026-03-01 01:11:16.660283 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.660286 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:11:16.660289 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:11:16.660292 | orchestrator | 2026-03-01 01:11:16.660295 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-01 01:11:16.660298 | orchestrator | Sunday 01 March 2026 01:08:42 +0000 (0:00:00.451) 0:00:32.338 ********** 2026-03-01 01:11:16.660302 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:11:16.660305 | orchestrator | 2026-03-01 01:11:16.660308 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-01 01:11:16.660311 | orchestrator | Sunday 01 March 2026 01:08:43 +0000 (0:00:01.791) 0:00:34.130 ********** 2026-03-01 01:11:16.660316 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-01 01:11:16.660320 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-01 01:11:16.660323 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-01 01:11:16.660326 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-01 01:11:16.660331 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-01 01:11:16.660337 | orchestrator | changed: [testbed-node-2] 
=> (item=cinder-backup) 2026-03-01 01:11:16.660345 | orchestrator | 2026-03-01 01:11:16.660351 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-01 01:11:16.660356 | orchestrator | Sunday 01 March 2026 01:08:46 +0000 (0:00:02.549) 0:00:36.679 ********** 2026-03-01 01:11:16.660363 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660372 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660379 | 
orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660391 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660400 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660406 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-01 01:11:16.660414 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660420 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660430 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660438 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660444 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660452 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-01 01:11:16.660458 | orchestrator | 2026-03-01 01:11:16.660463 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-01 01:11:16.660469 | orchestrator | Sunday 01 March 2026 01:08:50 +0000 (0:00:03.854) 0:00:40.534 ********** 2026-03-01 01:11:16.660475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:11:16.660482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:11:16.660619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-01 01:11:16.660629 | orchestrator | 2026-03-01 01:11:16.660635 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-01 01:11:16.660639 | orchestrator | Sunday 01 March 2026 01:08:52 +0000 (0:00:02.145) 0:00:42.680 ********** 2026-03-01 01:11:16.660644 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-01 01:11:16.660649 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-01 01:11:16.660654 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-01 01:11:16.660659 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:11:16.660664 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:11:16.660669 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-01 01:11:16.660673 | orchestrator | 2026-03-01 01:11:16.660676 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-01 01:11:16.660679 | orchestrator | Sunday 01 March 
2026 01:08:55 +0000 (0:00:03.308) 0:00:45.989 ********** 2026-03-01 01:11:16.660682 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-01 01:11:16.660686 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-01 01:11:16.660714 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-01 01:11:16.660718 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-01 01:11:16.660721 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-01 01:11:16.660724 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-01 01:11:16.660727 | orchestrator | 2026-03-01 01:11:16.660731 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-01 01:11:16.660734 | orchestrator | Sunday 01 March 2026 01:08:56 +0000 (0:00:01.178) 0:00:47.167 ********** 2026-03-01 01:11:16.660737 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.660741 | orchestrator | 2026-03-01 01:11:16.660744 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-01 01:11:16.660747 | orchestrator | Sunday 01 March 2026 01:08:57 +0000 (0:00:00.123) 0:00:47.291 ********** 2026-03-01 01:11:16.660750 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.660753 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:11:16.660756 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:11:16.660759 | orchestrator | 2026-03-01 01:11:16.660763 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-01 01:11:16.660766 | orchestrator | Sunday 01 March 2026 01:08:57 +0000 (0:00:00.326) 0:00:47.617 ********** 2026-03-01 01:11:16.660769 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:11:16.660772 | orchestrator | 2026-03-01 01:11:16.660776 | orchestrator | TASK [service-cert-copy : 
cinder | Copying over extra CA certificates] ********* 2026-03-01 01:11:16.660794 | orchestrator | Sunday 01 March 2026 01:08:58 +0000 (0:00:00.793) 0:00:48.410 ********** 2026-03-01 01:11:16.660798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.660808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}) 2026-03-01 01:11:16.660812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.660815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.660858 | orchestrator | 2026-03-01 01:11:16.660861 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-01 01:11:16.660867 | orchestrator | Sunday 01 March 2026 01:09:02 +0000 (0:00:04.759) 0:00:53.169 ********** 2026-03-01 01:11:16.660873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-01 01:11:16.660876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.660879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.660883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660886 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:11:16.660894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.660904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660926 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:11:16.660931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.660936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660958 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:11:16.660963 | orchestrator |
2026-03-01 01:11:16.660968 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-01 01:11:16.660972 | orchestrator | Sunday 01 March 2026 01:09:04 +0000 (0:00:01.530) 0:00:54.700 **********
2026-03-01 01:11:16.660981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.660987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.660993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661008 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:11:16.661014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661035 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:11:16.661040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661091 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:11:16.661096 | orchestrator |
2026-03-01 01:11:16.661101 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-01 01:11:16.661106 | orchestrator | Sunday 01 March 2026 01:09:06 +0000 (0:00:02.150) 0:00:56.850 **********
2026-03-01 01:11:16.661111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661193 | orchestrator |
2026-03-01 01:11:16.661198 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-01 01:11:16.661204 | orchestrator | Sunday 01 March 2026 01:09:10 +0000 (0:00:04.047) 0:01:00.898 **********
2026-03-01 01:11:16.661210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-01 01:11:16.661215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-01 01:11:16.661220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-01 01:11:16.661229 | orchestrator |
2026-03-01 01:11:16.661235 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-01 01:11:16.661240 | orchestrator | Sunday 01 March 2026 01:09:12 +0000 (0:00:01.786) 0:01:02.685 **********
2026-03-01 01:11:16.661249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661387 | orchestrator |
2026-03-01 01:11:16.661392 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-01 01:11:16.661397 | orchestrator | Sunday 01 March 2026 01:09:28 +0000 (0:00:16.204) 0:01:18.890 **********
2026-03-01 01:11:16.661402 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:11:16.661407 | orchestrator | changed: [testbed-node-1]
2026-03-01 01:11:16.661413 | orchestrator | changed: [testbed-node-2]
2026-03-01 01:11:16.661418 | orchestrator |
2026-03-01 01:11:16.661424 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-01 01:11:16.661433 | orchestrator | Sunday 01 March 2026 01:09:30 +0000 (0:00:01.443) 0:01:20.333 **********
2026-03-01 01:11:16.661439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661468 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:11:16.661473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-01 01:11:16.661483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:11:16.661489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.661497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.661502 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:11:16.661508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-01 01:11:16.661516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.661522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.661531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-01 01:11:16.661537 | orchestrator | skipping: 
[testbed-node-1] 2026-03-01 01:11:16.661541 | orchestrator | 2026-03-01 01:11:16.661545 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-01 01:11:16.661551 | orchestrator | Sunday 01 March 2026 01:09:31 +0000 (0:00:00.930) 0:01:21.264 ********** 2026-03-01 01:11:16.661560 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.661565 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:11:16.661570 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:11:16.661576 | orchestrator | 2026-03-01 01:11:16.661581 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-01 01:11:16.661585 | orchestrator | Sunday 01 March 2026 01:09:31 +0000 (0:00:00.325) 0:01:21.590 ********** 2026-03-01 01:11:16.661593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.661602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.661609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-01 01:11:16.661618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-01 01:11:16.661682 | orchestrator | 2026-03-01 01:11:16.661702 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-01 01:11:16.661708 | orchestrator | Sunday 01 March 2026 01:09:34 +0000 (0:00:03.191) 0:01:24.781 ********** 2026-03-01 01:11:16.661714 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.661720 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:11:16.661725 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:11:16.661730 | orchestrator | 2026-03-01 01:11:16.661735 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-01 01:11:16.661741 | orchestrator | Sunday 01 March 2026 01:09:35 +0000 (0:00:00.503) 0:01:25.284 ********** 2026-03-01 01:11:16.661746 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661750 | orchestrator | 2026-03-01 01:11:16.661755 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-01 01:11:16.661760 | orchestrator | Sunday 01 March 2026 01:09:37 +0000 (0:00:02.311) 0:01:27.595 ********** 2026-03-01 01:11:16.661766 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661771 | orchestrator | 2026-03-01 01:11:16.661776 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-01 01:11:16.661781 | orchestrator | Sunday 01 March 2026 01:09:39 +0000 (0:00:02.522) 0:01:30.118 ********** 2026-03-01 01:11:16.661787 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661792 | orchestrator | 2026-03-01 01:11:16.661798 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-01 01:11:16.661801 | orchestrator | Sunday 01 March 2026 01:09:58 +0000 (0:00:18.856) 0:01:48.974 ********** 2026-03-01 01:11:16.661804 | orchestrator | 2026-03-01 01:11:16.661808 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-01 01:11:16.661811 | orchestrator | Sunday 01 March 2026 01:09:58 +0000 (0:00:00.068) 
0:01:49.042 ********** 2026-03-01 01:11:16.661814 | orchestrator | 2026-03-01 01:11:16.661817 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-01 01:11:16.661820 | orchestrator | Sunday 01 March 2026 01:09:58 +0000 (0:00:00.065) 0:01:49.108 ********** 2026-03-01 01:11:16.661824 | orchestrator | 2026-03-01 01:11:16.661827 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-01 01:11:16.661830 | orchestrator | Sunday 01 March 2026 01:09:58 +0000 (0:00:00.066) 0:01:49.174 ********** 2026-03-01 01:11:16.661833 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661836 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:11:16.661839 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:11:16.661842 | orchestrator | 2026-03-01 01:11:16.661845 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-01 01:11:16.661848 | orchestrator | Sunday 01 March 2026 01:10:28 +0000 (0:00:29.093) 0:02:18.268 ********** 2026-03-01 01:11:16.661851 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661855 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:11:16.661858 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:11:16.661861 | orchestrator | 2026-03-01 01:11:16.661864 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-01 01:11:16.661867 | orchestrator | Sunday 01 March 2026 01:10:38 +0000 (0:00:10.707) 0:02:28.976 ********** 2026-03-01 01:11:16.661870 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661874 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:11:16.661877 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:11:16.661880 | orchestrator | 2026-03-01 01:11:16.661883 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-01 01:11:16.661886 | 
orchestrator | Sunday 01 March 2026 01:11:06 +0000 (0:00:27.978) 0:02:56.954 ********** 2026-03-01 01:11:16.661889 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:11:16.661892 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:11:16.661896 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:11:16.661903 | orchestrator | 2026-03-01 01:11:16.661906 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-01 01:11:16.661913 | orchestrator | Sunday 01 March 2026 01:11:13 +0000 (0:00:06.293) 0:03:03.247 ********** 2026-03-01 01:11:16.661916 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:11:16.661919 | orchestrator | 2026-03-01 01:11:16.661922 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:11:16.661926 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-01 01:11:16.661929 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:11:16.661933 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:11:16.661936 | orchestrator | 2026-03-01 01:11:16.661939 | orchestrator | 2026-03-01 01:11:16.661942 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:11:16.661945 | orchestrator | Sunday 01 March 2026 01:11:13 +0000 (0:00:00.276) 0:03:03.524 ********** 2026-03-01 01:11:16.661948 | orchestrator | =============================================================================== 2026-03-01 01:11:16.661951 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.09s 2026-03-01 01:11:16.661954 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.98s 2026-03-01 01:11:16.661957 | orchestrator | cinder : Running Cinder 
bootstrap container ---------------------------- 18.86s 2026-03-01 01:11:16.661960 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.20s 2026-03-01 01:11:16.661963 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.71s 2026-03-01 01:11:16.661966 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.90s 2026-03-01 01:11:16.661969 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.29s 2026-03-01 01:11:16.661973 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.67s 2026-03-01 01:11:16.661978 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.76s 2026-03-01 01:11:16.661981 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.52s 2026-03-01 01:11:16.661984 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.05s 2026-03-01 01:11:16.661987 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.85s 2026-03-01 01:11:16.661991 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.81s 2026-03-01 01:11:16.661994 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.31s 2026-03-01 01:11:16.661997 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.25s 2026-03-01 01:11:16.662000 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.19s 2026-03-01 01:11:16.662003 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.13s 2026-03-01 01:11:16.662006 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.55s 2026-03-01 01:11:16.662009 | orchestrator | cinder : Creating Cinder database user 
and setting permissions ---------- 2.52s 2026-03-01 01:11:16.662036 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.31s 2026-03-01 01:11:16.662042 | orchestrator | 2026-03-01 01:11:16 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:16.662681 | orchestrator | 2026-03-01 01:11:16 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:16.664294 | orchestrator | 2026-03-01 01:11:16 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:16.664320 | orchestrator | 2026-03-01 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:19.703774 | orchestrator | 2026-03-01 01:11:19 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:19.704564 | orchestrator | 2026-03-01 01:11:19 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:19.706298 | orchestrator | 2026-03-01 01:11:19 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:19.706339 | orchestrator | 2026-03-01 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:22.753237 | orchestrator | 2026-03-01 01:11:22 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:22.755455 | orchestrator | 2026-03-01 01:11:22 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:22.757189 | orchestrator | 2026-03-01 01:11:22 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:22.757241 | orchestrator | 2026-03-01 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:25.806627 | orchestrator | 2026-03-01 01:11:25 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:25.808973 | orchestrator | 2026-03-01 01:11:25 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 
2026-03-01 01:11:25.811598 | orchestrator | 2026-03-01 01:11:25 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:25.811791 | orchestrator | 2026-03-01 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:28.865889 | orchestrator | 2026-03-01 01:11:28 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:28.867831 | orchestrator | 2026-03-01 01:11:28 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:28.870535 | orchestrator | 2026-03-01 01:11:28 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:28.870587 | orchestrator | 2026-03-01 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:31.910280 | orchestrator | 2026-03-01 01:11:31 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:31.911229 | orchestrator | 2026-03-01 01:11:31 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:31.913121 | orchestrator | 2026-03-01 01:11:31 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:31.913161 | orchestrator | 2026-03-01 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:34.957982 | orchestrator | 2026-03-01 01:11:34 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:34.960592 | orchestrator | 2026-03-01 01:11:34 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED 2026-03-01 01:11:34.962169 | orchestrator | 2026-03-01 01:11:34 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:11:34.962227 | orchestrator | 2026-03-01 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:11:38.026367 | orchestrator | 2026-03-01 01:11:38 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED 2026-03-01 01:11:38.026419 | orchestrator | 2026-03-01 
01:11:38 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:38.026426 | orchestrator | 2026-03-01 01:11:38 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:38.026432 | orchestrator | 2026-03-01 01:11:38 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:41.072461 | orchestrator | 2026-03-01 01:11:41 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:41.076521 | orchestrator | 2026-03-01 01:11:41 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:41.078745 | orchestrator | 2026-03-01 01:11:41 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:41.078819 | orchestrator | 2026-03-01 01:11:41 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:44.112410 | orchestrator | 2026-03-01 01:11:44 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:44.112725 | orchestrator | 2026-03-01 01:11:44 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:44.114077 | orchestrator | 2026-03-01 01:11:44 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:44.114176 | orchestrator | 2026-03-01 01:11:44 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:47.155855 | orchestrator | 2026-03-01 01:11:47 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:47.156267 | orchestrator | 2026-03-01 01:11:47 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:47.157682 | orchestrator | 2026-03-01 01:11:47 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:47.157734 | orchestrator | 2026-03-01 01:11:47 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:50.202855 | orchestrator | 2026-03-01 01:11:50 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:50.204305 | orchestrator | 2026-03-01 01:11:50 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:50.204389 | orchestrator | 2026-03-01 01:11:50 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:50.204396 | orchestrator | 2026-03-01 01:11:50 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:53.265780 | orchestrator | 2026-03-01 01:11:53 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:53.267346 | orchestrator | 2026-03-01 01:11:53 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:53.269189 | orchestrator | 2026-03-01 01:11:53 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:53.269239 | orchestrator | 2026-03-01 01:11:53 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:56.319940 | orchestrator | 2026-03-01 01:11:56 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:56.321639 | orchestrator | 2026-03-01 01:11:56 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:56.324123 | orchestrator | 2026-03-01 01:11:56 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:56.324164 | orchestrator | 2026-03-01 01:11:56 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:11:59.355727 | orchestrator | 2026-03-01 01:11:59 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:11:59.358580 | orchestrator | 2026-03-01 01:11:59 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:11:59.359179 | orchestrator | 2026-03-01 01:11:59 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:11:59.359210 | orchestrator | 2026-03-01 01:11:59 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:02.424318 | orchestrator | 2026-03-01 01:12:02 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:02.429946 | orchestrator | 2026-03-01 01:12:02 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:02.434905 | orchestrator | 2026-03-01 01:12:02 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:02.435006 | orchestrator | 2026-03-01 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:05.479733 | orchestrator | 2026-03-01 01:12:05 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:05.480987 | orchestrator | 2026-03-01 01:12:05 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:05.482496 | orchestrator | 2026-03-01 01:12:05 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:05.482539 | orchestrator | 2026-03-01 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:08.532942 | orchestrator | 2026-03-01 01:12:08 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:08.535708 | orchestrator | 2026-03-01 01:12:08 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:08.538655 | orchestrator | 2026-03-01 01:12:08 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:08.538719 | orchestrator | 2026-03-01 01:12:08 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:11.582322 | orchestrator | 2026-03-01 01:12:11 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:11.583663 | orchestrator | 2026-03-01 01:12:11 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:11.585558 | orchestrator | 2026-03-01 01:12:11 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:11.585634 | orchestrator | 2026-03-01 01:12:11 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:14.633420 | orchestrator | 2026-03-01 01:12:14 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:14.636143 | orchestrator | 2026-03-01 01:12:14 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:14.637948 | orchestrator | 2026-03-01 01:12:14 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:14.637988 | orchestrator | 2026-03-01 01:12:14 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:17.680441 | orchestrator | 2026-03-01 01:12:17 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:17.683579 | orchestrator | 2026-03-01 01:12:17 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:17.684999 | orchestrator | 2026-03-01 01:12:17 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:17.685038 | orchestrator | 2026-03-01 01:12:17 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:20.734929 | orchestrator | 2026-03-01 01:12:20 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:20.737601 | orchestrator | 2026-03-01 01:12:20 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:20.739522 | orchestrator | 2026-03-01 01:12:20 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:20.739800 | orchestrator | 2026-03-01 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:23.786820 | orchestrator | 2026-03-01 01:12:23 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:23.788850 | orchestrator | 2026-03-01 01:12:23 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:23.790672 | orchestrator | 2026-03-01 01:12:23 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:23.790897 | orchestrator | 2026-03-01 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:26.828582 | orchestrator | 2026-03-01 01:12:26 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:26.829285 | orchestrator | 2026-03-01 01:12:26 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:26.830138 | orchestrator | 2026-03-01 01:12:26 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:26.830317 | orchestrator | 2026-03-01 01:12:26 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:29.870338 | orchestrator | 2026-03-01 01:12:29 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state STARTED
2026-03-01 01:12:29.873082 | orchestrator | 2026-03-01 01:12:29 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:29.874438 | orchestrator | 2026-03-01 01:12:29 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:29.874695 | orchestrator | 2026-03-01 01:12:29 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:32.933803 | orchestrator | 2026-03-01 01:12:32 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:32.935039 | orchestrator | 2026-03-01 01:12:32 | INFO  | Task cb2efa53-86d3-4581-b628-8bfe2b550497 is in state SUCCESS
2026-03-01 01:12:32.936764 | orchestrator | 2026-03-01 01:12:32 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:32.938607 | orchestrator | 2026-03-01 01:12:32 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:32.939950 | orchestrator | 2026-03-01 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:35.987932 | orchestrator | 2026-03-01 01:12:35 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:35.989744 | orchestrator | 2026-03-01 01:12:35 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:35.991210 | orchestrator | 2026-03-01 01:12:35 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:35.991264 | orchestrator | 2026-03-01 01:12:35 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:39.034742 | orchestrator | 2026-03-01 01:12:39 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:39.037395 | orchestrator | 2026-03-01 01:12:39 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:39.044068 | orchestrator | 2026-03-01 01:12:39 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:39.044121 | orchestrator | 2026-03-01 01:12:39 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:42.077497 | orchestrator | 2026-03-01 01:12:42 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:42.078760 | orchestrator | 2026-03-01 01:12:42 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:42.079821 | orchestrator | 2026-03-01 01:12:42 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:42.079865 | orchestrator | 2026-03-01 01:12:42 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:45.113010 | orchestrator | 2026-03-01 01:12:45 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:45.113094 | orchestrator | 2026-03-01 01:12:45 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:45.113700 | orchestrator | 2026-03-01 01:12:45 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:45.113733 | orchestrator | 2026-03-01 01:12:45 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:48.162458 | orchestrator | 2026-03-01 01:12:48 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:48.164068 | orchestrator | 2026-03-01 01:12:48 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:48.164995 | orchestrator | 2026-03-01 01:12:48 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:48.165188 | orchestrator | 2026-03-01 01:12:48 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:51.199093 | orchestrator | 2026-03-01 01:12:51 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:51.201256 | orchestrator | 2026-03-01 01:12:51 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state STARTED
2026-03-01 01:12:51.202726 | orchestrator | 2026-03-01 01:12:51 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED
2026-03-01 01:12:51.202755 | orchestrator | 2026-03-01 01:12:51 | INFO  | Wait 1 second(s) until the next check
2026-03-01 01:12:54.239998 | orchestrator | 2026-03-01 01:12:54 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED
2026-03-01 01:12:54.246117 | orchestrator |
2026-03-01 01:12:54.246175 | orchestrator |
2026-03-01 01:12:54.246185 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:12:54.246193 | orchestrator |
2026-03-01 01:12:54.246199 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:12:54.246206 | orchestrator | Sunday 01 March 2026 01:09:52 +0000 (0:00:00.169) 0:00:00.169 **********
2026-03-01 01:12:54.246213 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:12:54.246220 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:12:54.246227 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:12:54.246233 | orchestrator |
2026-03-01 01:12:54.246239 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 01:12:54.246245 | orchestrator | Sunday 01 March 2026 01:09:52 +0000 (0:00:00.312) 0:00:00.482 **********
2026-03-01 01:12:54.246252 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-01 01:12:54.246259 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-01 01:12:54.246265 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-01 01:12:54.246271 | orchestrator |
2026-03-01 01:12:54.246277 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-01 01:12:54.246284 | orchestrator |
2026-03-01 01:12:54.246290 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-01 01:12:54.246297 | orchestrator | Sunday 01 March 2026 01:09:53 +0000 (0:00:00.566) 0:00:01.048 **********
2026-03-01 01:12:54.246303 | orchestrator |
2026-03-01 01:12:54.246310 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-01 01:12:54.246316 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:12:54.246322 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:12:54.246328 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:12:54.246334 | orchestrator |
2026-03-01 01:12:54.246384 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:12:54.246389 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:12:54.246394 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:12:54.246398 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:12:54.246416 | orchestrator |
2026-03-01 01:12:54.246420 | orchestrator |
2026-03-01 01:12:54.246424 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:12:54.246428 | orchestrator | Sunday 01 March 2026 01:12:29 +0000 (0:02:36.698) 0:02:37.746 **********
2026-03-01 01:12:54.246432 | orchestrator | ===============================================================================
2026-03-01 01:12:54.246436 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 156.70s
2026-03-01 01:12:54.246439 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2026-03-01 01:12:54.246443 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-01 01:12:54.246447 | orchestrator |
2026-03-01 01:12:54.246451 | orchestrator |
2026-03-01 01:12:54.246454 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-01 01:12:54.246458 | orchestrator |
2026-03-01 01:12:54.246462 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-01 01:12:54.246466 | orchestrator | Sunday 01 March 2026 01:10:45 +0000 (0:00:00.263) 0:00:00.263 **********
2026-03-01 01:12:54.246469 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:12:54.246473 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:12:54.246477 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:12:54.246480 | orchestrator |
2026-03-01 01:12:54.246499 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-01 01:12:54.246542 | orchestrator | Sunday 01 March 2026 01:10:46 +0000 (0:00:00.297) 0:00:00.560 **********
2026-03-01 01:12:54.246547 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-01 01:12:54.246551 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-01 01:12:54.246555 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-01 01:12:54.246559 | orchestrator |
2026-03-01 01:12:54.246563 | orchestrator | PLAY [Apply role grafana]
******************************************************
2026-03-01 01:12:54.246566 | orchestrator |
2026-03-01 01:12:54.246570 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-01 01:12:54.246574 | orchestrator | Sunday 01 March 2026 01:10:46 +0000 (0:00:00.417) 0:00:00.978 **********
2026-03-01 01:12:54.246578 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:12:54.246581 | orchestrator |
2026-03-01 01:12:54.246585 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-01 01:12:54.246589 | orchestrator | Sunday 01 March 2026 01:10:47 +0000 (0:00:00.490) 0:00:01.469 **********
2026-03-01 01:12:54.246595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246623 | orchestrator |
2026-03-01 01:12:54.246626 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-01 01:12:54.246630 | orchestrator | Sunday 01 March 2026 01:10:47 +0000 (0:00:00.738) 0:00:02.208 **********
2026-03-01 01:12:54.246634 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-01 01:12:54.246638 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-01 01:12:54.246642 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-01 01:12:54.246664 | orchestrator |
2026-03-01 01:12:54.246668 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-01 01:12:54.246672 | orchestrator | Sunday 01 March 2026 01:10:48 +0000 (0:00:00.922) 0:00:03.130 **********
2026-03-01 01:12:54.246676 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:12:54.246680 | orchestrator |
2026-03-01 01:12:54.246684 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-01 01:12:54.246687 | orchestrator | Sunday 01 March 2026 01:10:49 +0000 (0:00:00.683) 0:00:03.814 **********
2026-03-01 01:12:54.246691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246703 | orchestrator |
2026-03-01 01:12:54.246710 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-01 01:12:54.246716 | orchestrator | Sunday 01 March 2026 01:10:50 +0000 (0:00:01.289) 0:00:05.103 **********
2026-03-01 01:12:54.246720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246724 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:12:54.246728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246732 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:12:54.246736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246740 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:12:54.246744 | orchestrator |
2026-03-01 01:12:54.246748 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-01 01:12:54.246752 | orchestrator | Sunday 01 March 2026 01:10:51 +0000 (0:00:00.384) 0:00:05.488 **********
2026-03-01 01:12:54.246755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246759 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:12:54.246763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246769 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:12:54.246777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246781 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:12:54.246785 | orchestrator |
2026-03-01 01:12:54.246788 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-01 01:12:54.246792 | orchestrator | Sunday 01 March 2026 01:10:51 +0000 (0:00:00.768) 0:00:06.257 **********
2026-03-01 01:12:54.246796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246808 | orchestrator |
2026-03-01 01:12:54.246812 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-01 01:12:54.246816 | orchestrator | Sunday 01 March 2026 01:10:53 +0000 (0:00:01.334) 0:00:07.592 **********
2026-03-01 01:12:54.246820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-01 01:12:54.246837 | orchestrator |
2026-03-01 01:12:54.246841 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-01 01:12:54.246844 | orchestrator | Sunday 01 March 2026 01:10:54 +0000 (0:00:01.471) 0:00:09.063 **********
2026-03-01 01:12:54.246848 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:12:54.246852 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:12:54.246856 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:12:54.246859 | orchestrator |
2026-03-01 01:12:54.246863 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-01 01:12:54.246867 | orchestrator | Sunday 01 March 2026 01:10:55 +0000 (0:00:00.512) 0:00:09.576 **********
2026-03-01 01:12:54.246871 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-01 01:12:54.246875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-01 01:12:54.246878 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-01 01:12:54.246882 | orchestrator |
2026-03-01 01:12:54.246886 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-01 01:12:54.246889 | orchestrator | Sunday 01 March 2026 01:10:56 +0000 (0:00:01.396) 0:00:10.973 **********
2026-03-01 01:12:54.246893 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-01 01:12:54.246897 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-01 01:12:54.246901 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-01 01:12:54.246904 | orchestrator |
2026-03-01 01:12:54.246908 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-01 01:12:54.246926 | orchestrator | Sunday 01 March 2026 01:10:57 +0000 (0:00:01.337) 0:00:12.310 **********
2026-03-01 01:12:54.246930 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-01 01:12:54.246934 | orchestrator |
2026-03-01 01:12:54.246937 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-01 01:12:54.246941 | orchestrator | Sunday 01 March 2026 01:10:58 +0000 (0:00:00.999) 0:00:13.310 **********
2026-03-01 01:12:54.246945 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-01 01:12:54.246948 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-01 01:12:54.246957 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:12:54.246960 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:12:54.246964 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:12:54.246968 | orchestrator |
2026-03-01 01:12:54.246971 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-01 01:12:54.246975 | orchestrator | Sunday 01 March 2026 01:10:59 +0000 (0:00:00.735) 0:00:14.046 **********
2026-03-01 01:12:54.246979 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:12:54.246983 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:12:54.246986 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:12:54.246990 | orchestrator |
2026-03-01 01:12:54.246994 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-01 01:12:54.246997 | orchestrator | Sunday 01 March 2026 01:11:00 +0000 (0:00:00.554) 0:00:14.600 **********
2026-03-01 01:12:54.247001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1333168, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.562387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-01 01:12:54.247010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1333168, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.562387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-01 01:12:54.247014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1333168, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.562387, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-01 01:12:54.247018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1333205, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5692384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-01 01:12:54.247023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1333205, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5692384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-01 01:12:54.247029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1333205, 'dev': 112, 
'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5692384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1333258, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5785306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1333258, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5785306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 
1333258, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5785306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1333198, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5673513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1333198, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5673513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 
'inode': 1333198, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5673513, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1333266, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.581697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1333266, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.581697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1333266, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.581697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1333179, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5638828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1333179, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5638828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1333179, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5638828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1333232, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.572586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1333232, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.572586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1333232, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.572586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54 | INFO  | Task 9565231c-1c50-47f1-96c8-721a4a1567e7 is in state SUCCESS 2026-03-01 01:12:54.247381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1333248, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5762248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1333248, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5762248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True,
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1333164, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5610826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1333248, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5762248, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1333164, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5610826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1333173, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5630345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1333164, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5610826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1333173, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5630345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1333202, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5679674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1333173, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5630345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1333202, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5679674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1333240, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1333202, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5679674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1333240, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1333256, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.577625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1333240, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1333256, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.577625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1333188, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5667129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1333256, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.577625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1333188, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5667129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1333245, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5754473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1333188, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5667129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1333277, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5818832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1333245, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5754473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1333245, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5754473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1333277, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5818832, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1333238, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1333277, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5818832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1333238, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 
1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1333228, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5718021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1333238, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5732076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1333228, 'dev': 112, 'nlink': 
1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5718021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1333220, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1333228, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5718021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
27387, 'inode': 1333220, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1333242, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5742662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1333220, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1333242, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5742662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1333214, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1333242, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5742662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1333214, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1333252, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5772479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1333214, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.570031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1333252, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5772479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1333185, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5653203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1333252, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5772479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1333185, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5653203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1333417, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6128345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1333185, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5653203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1333417, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6128345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1333312, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5945945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1333417, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6128345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1333292, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5873098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1333312, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5945945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1333312, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5945945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247776 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1333338, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5964186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1333292, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5873098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1333292, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5873098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 
01:12:54.247793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1333284, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5842385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1333338, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5964186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1333338, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5964186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1333373, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6060066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1333284, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5842385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1333284, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5842385, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1333340, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6030893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1333373, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6060066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1333373, 'dev': 112, 
'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6060066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1333379, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6068432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1333340, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6030893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1333340, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6030893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1333410, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6113484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1333379, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6068432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1333379, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6068432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1333369, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6053603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1333410, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6113484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247875 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1333332, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5956705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1333410, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6113484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1333369, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6053603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247887 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1333308, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5909812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1333369, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6053603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1333332, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5956705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-01 01:12:54.247903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1333328, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5952098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1333332, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5956705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1333308, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5909812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1333297, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5900052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1333328, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5952098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1333308, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5909812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1333335, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5959475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1333297, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5900052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1333328, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 
'ctime': 1772324499.5952098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1333396, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6110332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1333335, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5959475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
187864, 'inode': 1333297, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5900052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1333389, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6089733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1333396, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6110332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1333335, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5959475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1333286, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5850406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1333389, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6089733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.247995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1333396, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6110332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1333288, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5867667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1333286, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5850406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248043 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1333389, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6089733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1333364, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6045117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1333288, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5867667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1333286, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5850406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1333382, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6075816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1333364, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6045117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1333288, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.5867667, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1333382, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6075816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1333364, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 
1772323343.0, 'ctime': 1772324499.6045117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1333382, 'dev': 112, 'nlink': 1, 'atime': 1772323343.0, 'mtime': 1772323343.0, 'ctime': 1772324499.6075816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-01 01:12:54.248101 | orchestrator | 2026-03-01 01:12:54.248106 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-01 01:12:54.248110 | orchestrator | Sunday 01 March 2026 01:11:38 +0000 (0:00:38.830) 0:00:53.431 ********** 2026-03-01 01:12:54.248115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 01:12:54.248122 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 01:12:54.248129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-01 01:12:54.248133 | orchestrator | 2026-03-01 01:12:54.248137 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-01 01:12:54.248142 | orchestrator | Sunday 01 March 2026 01:11:39 +0000 (0:00:00.935) 0:00:54.366 ********** 2026-03-01 01:12:54.248146 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:12:54.248151 | orchestrator | 2026-03-01 01:12:54.248155 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-01 01:12:54.248159 | orchestrator | Sunday 01 March 2026 01:11:41 +0000 (0:00:02.044) 0:00:56.410 ********** 2026-03-01 01:12:54.248164 | 
orchestrator | changed: [testbed-node-0] 2026-03-01 01:12:54.248168 | orchestrator | 2026-03-01 01:12:54.248172 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-01 01:12:54.248177 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:02.278) 0:00:58.689 ********** 2026-03-01 01:12:54.248181 | orchestrator | 2026-03-01 01:12:54.248185 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-01 01:12:54.248190 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:00.064) 0:00:58.754 ********** 2026-03-01 01:12:54.248196 | orchestrator | 2026-03-01 01:12:54.248201 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-01 01:12:54.248205 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:00.226) 0:00:58.980 ********** 2026-03-01 01:12:54.248209 | orchestrator | 2026-03-01 01:12:54.248214 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-01 01:12:54.248218 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:00.074) 0:00:59.054 ********** 2026-03-01 01:12:54.248222 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:12:54.248227 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:12:54.248231 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:12:54.248235 | orchestrator | 2026-03-01 01:12:54.248240 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-01 01:12:54.248244 | orchestrator | Sunday 01 March 2026 01:11:46 +0000 (0:00:02.388) 0:01:01.443 ********** 2026-03-01 01:12:54.248248 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:12:54.248253 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:12:54.248257 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 
2026-03-01 01:12:54.248262 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-01 01:12:54.248267 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:12:54.248271 | orchestrator | 2026-03-01 01:12:54.248276 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-01 01:12:54.248281 | orchestrator | Sunday 01 March 2026 01:12:13 +0000 (0:00:26.357) 0:01:27.800 ********** 2026-03-01 01:12:54.248285 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:12:54.248289 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:12:54.248294 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:12:54.248298 | orchestrator | 2026-03-01 01:12:54.248302 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-01 01:12:54.248307 | orchestrator | Sunday 01 March 2026 01:12:48 +0000 (0:00:35.378) 0:02:03.179 ********** 2026-03-01 01:12:54.248311 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:12:54.248316 | orchestrator | 2026-03-01 01:12:54.248320 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-01 01:12:54.248324 | orchestrator | Sunday 01 March 2026 01:12:50 +0000 (0:00:01.938) 0:02:05.117 ********** 2026-03-01 01:12:54.248329 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:12:54.248333 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:12:54.248338 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:12:54.248342 | orchestrator | 2026-03-01 01:12:54.248347 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-01 01:12:54.248350 | orchestrator | Sunday 01 March 2026 01:12:51 +0000 (0:00:00.387) 0:02:05.505 ********** 2026-03-01 01:12:54.248355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-01 01:12:54.248361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-01 01:12:54.248366 | orchestrator | 2026-03-01 01:12:54.248370 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-01 01:12:54.248374 | orchestrator | Sunday 01 March 2026 01:12:53 +0000 (0:00:02.076) 0:02:07.582 ********** 2026-03-01 01:12:54.248377 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:12:54.248381 | orchestrator | 2026-03-01 01:12:54.248385 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:12:54.248393 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:12:54.248398 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:12:54.248402 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:12:54.248405 | orchestrator | 2026-03-01 01:12:54.248409 | orchestrator | 2026-03-01 01:12:54.248413 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:12:54.248417 | orchestrator | Sunday 01 March 2026 01:12:53 +0000 (0:00:00.227) 0:02:07.809 ********** 2026-03-01 01:12:54.248420 | orchestrator | =============================================================================== 2026-03-01 01:12:54.248424 | orchestrator | grafana : 
Copying over custom dashboards ------------------------------- 38.83s 2026-03-01 01:12:54.248428 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.38s 2026-03-01 01:12:54.248432 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.36s 2026-03-01 01:12:54.248435 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.39s 2026-03-01 01:12:54.248439 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.28s 2026-03-01 01:12:54.248443 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.08s 2026-03-01 01:12:54.248446 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.04s 2026-03-01 01:12:54.248450 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.94s 2026-03-01 01:12:54.248454 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s 2026-03-01 01:12:54.248458 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.40s 2026-03-01 01:12:54.248461 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s 2026-03-01 01:12:54.248465 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-03-01 01:12:54.248469 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-03-01 01:12:54.248472 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.00s 2026-03-01 01:12:54.248476 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.94s 2026-03-01 01:12:54.248480 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.92s 2026-03-01 01:12:54.248493 | orchestrator | service-cert-copy : 
grafana | Copying over backend internal TLS key ----- 0.77s 2026-03-01 01:12:54.248500 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.74s 2026-03-01 01:12:54.248505 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2026-03-01 01:12:54.248511 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2026-03-01 01:12:54.248516 | orchestrator | 2026-03-01 01:12:54 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:12:54.248522 | orchestrator | 2026-03-01 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:12:57.285788 | orchestrator | 2026-03-01 01:12:57 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:12:57.288255 | orchestrator | 2026-03-01 01:12:57 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state STARTED 2026-03-01 01:12:57.288336 | orchestrator | 2026-03-01 01:12:57 | INFO  | Wait 1 second(s) until the next check [... identical status checks for tasks f29420d2-649f-45f3-b015-95453c3b2780 and 66d978de-e5ce-4f93-9c33-55056d9b03dc repeated every ~3 seconds from 01:13:00 through 01:16:42; both tasks remained in state STARTED ...] 2026-03-01 01:16:45.673690 | orchestrator | 2026-03-01 01:16:45 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:16:45.679584 | orchestrator | 2026-03-01 01:16:45 | INFO  | Task 66d978de-e5ce-4f93-9c33-55056d9b03dc is in state SUCCESS 2026-03-01 01:16:45.680606 | orchestrator | 2026-03-01 01:16:45.680648 | orchestrator | 2026-03-01 01:16:45.680656 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2026-03-01 01:16:45.680663 | orchestrator | 2026-03-01 01:16:45.680672 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-01 01:16:45.680680 | orchestrator | Sunday 01 March 2026 01:08:15 +0000 (0:00:00.277) 0:00:00.277 ********** 2026-03-01 01:16:45.680686 | orchestrator | changed: [testbed-manager] 2026-03-01 01:16:45.680693 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.680700 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.680706 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.680712 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.680719 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.680725 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.680730 | orchestrator | 2026-03-01 01:16:45.680734 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:16:45.680738 | orchestrator | Sunday 01 March 2026 01:08:15 +0000 (0:00:00.852) 0:00:01.129 ********** 2026-03-01 01:16:45.680742 | orchestrator | changed: [testbed-manager] 2026-03-01 01:16:45.680745 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.680757 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.680761 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.680775 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.680780 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.680783 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.680787 | orchestrator | 2026-03-01 01:16:45.680791 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:16:45.680795 | orchestrator | Sunday 01 March 2026 01:08:16 +0000 (0:00:00.631) 0:00:01.760 ********** 2026-03-01 01:16:45.680798 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 
2026-03-01 01:16:45.680802 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-01 01:16:45.680806 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-01 01:16:45.680810 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-01 01:16:45.680813 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-01 01:16:45.680817 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-01 01:16:45.680820 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-01 01:16:45.680829 | orchestrator |
2026-03-01 01:16:45.680833 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-01 01:16:45.680837 | orchestrator |
2026-03-01 01:16:45.680841 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-01 01:16:45.680844 | orchestrator | Sunday 01 March 2026 01:08:17 +0000 (0:00:00.877) 0:00:02.637 **********
2026-03-01 01:16:45.680848 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:16:45.680852 | orchestrator |
2026-03-01 01:16:45.680856 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-01 01:16:45.680859 | orchestrator | Sunday 01 March 2026 01:08:18 +0000 (0:00:00.706) 0:00:03.344 **********
2026-03-01 01:16:45.680863 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-01 01:16:45.680867 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-01 01:16:45.680871 | orchestrator |
2026-03-01 01:16:45.680875 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-01 01:16:45.680879 | orchestrator | Sunday 01 March 2026 01:08:21 +0000 (0:00:03.697) 0:00:07.042 **********
2026-03-01 01:16:45.680882 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:16:45.680886 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-01 01:16:45.680890 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.680894 | orchestrator |
2026-03-01 01:16:45.680897 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-01 01:16:45.680901 | orchestrator | Sunday 01 March 2026 01:08:26 +0000 (0:00:04.532) 0:00:11.574 **********
2026-03-01 01:16:45.680905 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.680909 | orchestrator |
2026-03-01 01:16:45.680912 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-01 01:16:45.680916 | orchestrator | Sunday 01 March 2026 01:08:27 +0000 (0:00:00.704) 0:00:12.279 **********
2026-03-01 01:16:45.680920 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.680923 | orchestrator |
2026-03-01 01:16:45.680927 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-01 01:16:45.680931 | orchestrator | Sunday 01 March 2026 01:08:29 +0000 (0:00:02.054) 0:00:14.334 **********
2026-03-01 01:16:45.680934 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.680938 | orchestrator |
2026-03-01 01:16:45.680942 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-01 01:16:45.680946 | orchestrator | Sunday 01 March 2026 01:08:32 +0000 (0:00:03.729) 0:00:18.063 **********
2026-03-01 01:16:45.680949 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.680953 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.680957 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.680960 | orchestrator |
2026-03-01 01:16:45.680964 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-01 01:16:45.680968 | orchestrator | Sunday 01 March 2026 01:08:33 +0000 (0:00:00.273) 0:00:18.337 **********
2026-03-01 01:16:45.680974 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.680978 | orchestrator |
2026-03-01 01:16:45.680982 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-01 01:16:45.680986 | orchestrator | Sunday 01 March 2026 01:09:05 +0000 (0:00:32.826) 0:00:51.164 **********
2026-03-01 01:16:45.680989 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.680993 | orchestrator |
2026-03-01 01:16:45.680997 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-01 01:16:45.681001 | orchestrator | Sunday 01 March 2026 01:09:22 +0000 (0:00:16.143) 0:01:07.307 **********
2026-03-01 01:16:45.681004 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681008 | orchestrator |
2026-03-01 01:16:45.681012 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-01 01:16:45.681015 | orchestrator | Sunday 01 March 2026 01:09:38 +0000 (0:00:16.642) 0:01:23.950 **********
2026-03-01 01:16:45.681040 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681046 | orchestrator |
2026-03-01 01:16:45.681049 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-01 01:16:45.681053 | orchestrator | Sunday 01 March 2026 01:09:39 +0000 (0:00:01.255) 0:01:25.205 **********
2026-03-01 01:16:45.681057 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681061 | orchestrator |
2026-03-01 01:16:45.681064 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-01 01:16:45.681068 | orchestrator | Sunday 01 March 2026 01:09:40 +0000 (0:00:00.504) 0:01:25.710 **********
2026-03-01 01:16:45.681072 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:16:45.681076 | orchestrator |
2026-03-01 01:16:45.681099 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-01 01:16:45.681103 | orchestrator | Sunday 01 March 2026 01:09:41 +0000 (0:00:00.559) 0:01:26.269 **********
2026-03-01 01:16:45.681107 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681110 | orchestrator |
2026-03-01 01:16:45.681114 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-01 01:16:45.681121 | orchestrator | Sunday 01 March 2026 01:09:59 +0000 (0:00:18.496) 0:01:44.765 **********
2026-03-01 01:16:45.681124 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681128 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681147 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681154 | orchestrator |
2026-03-01 01:16:45.681159 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-01 01:16:45.681200 | orchestrator |
2026-03-01 01:16:45.681207 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-01 01:16:45.681212 | orchestrator | Sunday 01 March 2026 01:09:59 +0000 (0:00:00.365) 0:01:45.131 **********
2026-03-01 01:16:45.681219 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:16:45.681226 | orchestrator |
2026-03-01 01:16:45.681233 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-01 01:16:45.681239 | orchestrator | Sunday 01 March 2026 01:10:00 +0000 (0:00:00.644) 0:01:45.776 **********
2026-03-01 01:16:45.681245 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681252 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681259 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681267 | orchestrator |
2026-03-01 01:16:45.681272 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-01 01:16:45.681276 | orchestrator | Sunday 01 March 2026 01:10:02 +0000 (0:00:02.040) 0:01:47.816 **********
2026-03-01 01:16:45.681287 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681302 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681310 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681315 | orchestrator |
2026-03-01 01:16:45.681319 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-01 01:16:45.681331 | orchestrator | Sunday 01 March 2026 01:10:04 +0000 (0:00:02.153) 0:01:49.970 **********
2026-03-01 01:16:45.681373 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681378 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681382 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681387 | orchestrator |
2026-03-01 01:16:45.681391 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-01 01:16:45.681396 | orchestrator | Sunday 01 March 2026 01:10:05 +0000 (0:00:00.328) 0:01:50.299 **********
2026-03-01 01:16:45.681400 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-01 01:16:45.681405 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681409 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-01 01:16:45.681414 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681418 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-01 01:16:45.681423 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-01 01:16:45.681427 | orchestrator |
2026-03-01 01:16:45.681430 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-01 01:16:45.681434 | orchestrator | Sunday 01 March 2026 01:10:11 +0000 (0:00:06.735) 0:01:57.035 **********
2026-03-01 01:16:45.681438 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681442 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681445 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681449 | orchestrator |
2026-03-01 01:16:45.681453 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-01 01:16:45.681456 | orchestrator | Sunday 01 March 2026 01:10:12 +0000 (0:00:00.440) 0:01:57.476 **********
2026-03-01 01:16:45.681460 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-01 01:16:45.681464 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681467 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-01 01:16:45.681471 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681475 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-01 01:16:45.681478 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681482 | orchestrator |
2026-03-01 01:16:45.681486 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-01 01:16:45.681489 | orchestrator | Sunday 01 March 2026 01:10:12 +0000 (0:00:00.721) 0:01:58.119 **********
2026-03-01 01:16:45.681493 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681497 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681500 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681504 | orchestrator |
2026-03-01 01:16:45.681508 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-01 01:16:45.681511 | orchestrator | Sunday 01 March 2026 01:10:13 +0000 (0:00:01.262) 0:01:58.840 **********
2026-03-01 01:16:45.681515 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681519 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681522 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681526 | orchestrator |
2026-03-01 01:16:45.681530 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-01 01:16:45.681533 | orchestrator | Sunday 01 March 2026 01:10:14 +0000 (0:00:01.262) 0:02:00.103 **********
2026-03-01 01:16:45.681537 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681541 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681549 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681553 | orchestrator |
2026-03-01 01:16:45.681557 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-01 01:16:45.681560 | orchestrator | Sunday 01 March 2026 01:10:17 +0000 (0:00:02.433) 0:02:02.537 **********
2026-03-01 01:16:45.681564 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681568 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681571 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681575 | orchestrator |
2026-03-01 01:16:45.681579 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-01 01:16:45.681585 | orchestrator | Sunday 01 March 2026 01:10:39 +0000 (0:00:22.523) 0:02:25.060 **********
2026-03-01 01:16:45.681589 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681593 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681596 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681600 | orchestrator |
2026-03-01 01:16:45.681604 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-01 01:16:45.681608 | orchestrator | Sunday 01 March 2026 01:10:55 +0000 (0:00:16.021) 0:02:41.082 **********
2026-03-01 01:16:45.681614 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:16:45.681618 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681621 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681625 | orchestrator |
2026-03-01 01:16:45.681629 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-01 01:16:45.681633 | orchestrator | Sunday 01 March 2026 01:10:56 +0000 (0:00:00.985) 0:02:42.068 **********
2026-03-01 01:16:45.681636 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681640 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681643 | orchestrator | changed: [testbed-node-0]
2026-03-01 01:16:45.681647 | orchestrator |
2026-03-01 01:16:45.681651 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-01 01:16:45.681654 | orchestrator | Sunday 01 March 2026 01:11:09 +0000 (0:00:12.974) 0:02:55.042 **********
2026-03-01 01:16:45.681658 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681662 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681665 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681669 | orchestrator |
2026-03-01 01:16:45.681673 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-01 01:16:45.681676 | orchestrator | Sunday 01 March 2026 01:11:11 +0000 (0:00:01.229) 0:02:56.272 **********
2026-03-01 01:16:45.681680 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681684 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681687 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681691 | orchestrator |
2026-03-01 01:16:45.681695 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-01 01:16:45.681698 | orchestrator |
2026-03-01 01:16:45.681702 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-01 01:16:45.681706 | orchestrator | Sunday 01 March 2026 01:11:11 +0000 (0:00:00.544) 0:02:56.816 **********
2026-03-01 01:16:45.681709 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:16:45.681713 | orchestrator |
2026-03-01 01:16:45.681717 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-01 01:16:45.681721 | orchestrator | Sunday 01 March 2026 01:11:12 +0000 (0:00:00.549) 0:02:57.366 **********
2026-03-01 01:16:45.681724 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-01 01:16:45.681728 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-01 01:16:45.681732 | orchestrator |
2026-03-01 01:16:45.681735 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-01 01:16:45.681739 | orchestrator | Sunday 01 March 2026 01:11:15 +0000 (0:00:03.427) 0:03:00.793 **********
2026-03-01 01:16:45.681743 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-01 01:16:45.681747 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-01 01:16:45.681751 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-01 01:16:45.681755 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-01 01:16:45.681758 | orchestrator |
2026-03-01 01:16:45.681762 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-01 01:16:45.681766 | orchestrator | Sunday 01 March 2026 01:11:22 +0000 (0:00:06.549) 0:03:07.343 **********
2026-03-01 01:16:45.681772 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-01 01:16:45.681776 | orchestrator |
2026-03-01 01:16:45.681779 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-01 01:16:45.681783 | orchestrator | Sunday 01 March 2026 01:11:24 +0000 (0:00:02.847) 0:03:10.191 **********
2026-03-01 01:16:45.681787 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-01 01:16:45.681791 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-01 01:16:45.681794 | orchestrator |
2026-03-01 01:16:45.681798 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-01 01:16:45.681802 | orchestrator | Sunday 01 March 2026 01:11:28 +0000 (0:00:03.583) 0:03:13.774 **********
2026-03-01 01:16:45.681805 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-01 01:16:45.681809 | orchestrator |
2026-03-01 01:16:45.681812 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-01 01:16:45.681818 | orchestrator | Sunday 01 March 2026 01:11:31 +0000 (0:00:02.939) 0:03:16.713 **********
2026-03-01 01:16:45.681824 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-01 01:16:45.681830 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-01 01:16:45.681836 | orchestrator |
2026-03-01 01:16:45.681842 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-01 01:16:45.681853 | orchestrator | Sunday 01 March 2026 01:11:37 +0000 (0:00:06.494) 0:03:23.208 **********
2026-03-01 01:16:45.681865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.681874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.681885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.681894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.681901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.681905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.681909 | orchestrator |
2026-03-01 01:16:45.681913 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-01 01:16:45.681917 | orchestrator | Sunday 01 March 2026 01:11:39 +0000 (0:00:01.375) 0:03:24.583 **********
2026-03-01 01:16:45.681934 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681938 | orchestrator |
2026-03-01 01:16:45.681942 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-01 01:16:45.681946 | orchestrator | Sunday 01 March 2026 01:11:39 +0000 (0:00:00.129) 0:03:24.713 **********
2026-03-01 01:16:45.681957 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.681961 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.681966 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.681982 | orchestrator |
2026-03-01 01:16:45.681989 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-01 01:16:45.681993 | orchestrator | Sunday 01 March 2026 01:11:39 +0000 (0:00:00.497) 0:03:25.211 **********
2026-03-01 01:16:45.682000 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-01 01:16:45.682004 | orchestrator |
2026-03-01 01:16:45.682007 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-01 01:16:45.682011 | orchestrator | Sunday 01 March 2026 01:11:40 +0000 (0:00:00.741) 0:03:25.953 **********
2026-03-01 01:16:45.682081 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.682085 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.682088 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.682092 | orchestrator |
2026-03-01 01:16:45.682096 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-01 01:16:45.682100 | orchestrator | Sunday 01 March 2026 01:11:41 +0000 (0:00:00.293) 0:03:26.246 **********
2026-03-01 01:16:45.682103 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:16:45.682107 | orchestrator |
2026-03-01 01:16:45.682111 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-01 01:16:45.682115 | orchestrator | Sunday 01 March 2026 01:11:41 +0000 (0:00:00.556) 0:03:26.802 **********
2026-03-01 01:16:45.682119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682160 | orchestrator |
2026-03-01 01:16:45.682164 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-01 01:16:45.682168 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:02.780) 0:03:29.583 **********
2026-03-01 01:16:45.682177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682188 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.682192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682230 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.682239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-01 01:16:45.682243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.682250 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.682254 | orchestrator |
2026-03-01 01:16:45.682258 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-01 01:16:45.682261 | orchestrator | Sunday 01 March 2026 01:11:44 +0000 (0:00:00.643) 0:03:30.227 **********
2026-03-01 01:16:45.682265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682274 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682294 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682306 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682309 | orchestrator | 2026-03-01 01:16:45.682313 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-01 01:16:45.682317 | orchestrator | Sunday 01 March 2026 01:11:45 +0000 (0:00:00.817) 0:03:31.044 ********** 2026-03-01 01:16:45.682323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682362 | orchestrator | 2026-03-01 01:16:45.682367 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-01 01:16:45.682371 | orchestrator | Sunday 01 March 2026 01:11:48 +0000 (0:00:02.722) 0:03:33.766 ********** 2026-03-01 01:16:45.682376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682408 | orchestrator | 2026-03-01 01:16:45.682411 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-01 01:16:45.682415 | orchestrator | Sunday 01 March 2026 01:11:54 +0000 (0:00:05.567) 0:03:39.334 ********** 2026-03-01 01:16:45.682419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682433 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682446 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-01 01:16:45.682455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.682458 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682462 | orchestrator | 2026-03-01 01:16:45.682466 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-01 01:16:45.682472 | orchestrator | Sunday 01 March 2026 01:11:54 +0000 (0:00:00.588) 0:03:39.922 ********** 2026-03-01 01:16:45.682476 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.682480 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.682483 | orchestrator | changed: [testbed-node-2] 2026-03-01 
01:16:45.682487 | orchestrator | 2026-03-01 01:16:45.682493 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-01 01:16:45.682497 | orchestrator | Sunday 01 March 2026 01:11:56 +0000 (0:00:01.467) 0:03:41.390 ********** 2026-03-01 01:16:45.682500 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682504 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682508 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682512 | orchestrator | 2026-03-01 01:16:45.682515 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-01 01:16:45.682519 | orchestrator | Sunday 01 March 2026 01:11:56 +0000 (0:00:00.352) 0:03:41.743 ********** 2026-03-01 01:16:45.682526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-01 01:16:45.682555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.682576 | orchestrator | 2026-03-01 01:16:45.682580 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-01 01:16:45.682584 | orchestrator | Sunday 01 March 2026 01:11:58 +0000 (0:00:01.914) 0:03:43.658 ********** 2026-03-01 01:16:45.682588 | orchestrator | 2026-03-01 01:16:45.682593 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-01 01:16:45.682599 | orchestrator | Sunday 01 March 2026 01:11:58 +0000 (0:00:00.127) 0:03:43.786 ********** 2026-03-01 01:16:45.682605 | orchestrator | 2026-03-01 01:16:45.682612 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-01 01:16:45.682618 | orchestrator | Sunday 01 March 2026 01:11:58 +0000 (0:00:00.123) 0:03:43.909 ********** 2026-03-01 01:16:45.682625 | orchestrator | 2026-03-01 01:16:45.682631 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-01 01:16:45.682638 | orchestrator | Sunday 01 March 2026 01:11:58 +0000 (0:00:00.127) 0:03:44.036 ********** 2026-03-01 01:16:45.682644 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.682648 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.682652 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.682655 | orchestrator | 2026-03-01 01:16:45.682659 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-01 01:16:45.682663 | orchestrator | Sunday 01 
March 2026 01:12:21 +0000 (0:00:23.150) 0:04:07.187 ********** 2026-03-01 01:16:45.682667 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.682670 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.682677 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.682681 | orchestrator | 2026-03-01 01:16:45.682685 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-01 01:16:45.682688 | orchestrator | 2026-03-01 01:16:45.682692 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-01 01:16:45.682696 | orchestrator | Sunday 01 March 2026 01:12:27 +0000 (0:00:05.667) 0:04:12.855 ********** 2026-03-01 01:16:45.682700 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:16:45.682704 | orchestrator | 2026-03-01 01:16:45.682707 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-01 01:16:45.682711 | orchestrator | Sunday 01 March 2026 01:12:28 +0000 (0:00:01.182) 0:04:14.038 ********** 2026-03-01 01:16:45.682725 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.682729 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.682733 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.682736 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682740 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682744 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682748 | orchestrator | 2026-03-01 01:16:45.682752 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-01 01:16:45.682763 | orchestrator | Sunday 01 March 2026 01:12:29 +0000 (0:00:00.597) 0:04:14.636 ********** 2026-03-01 01:16:45.682767 | orchestrator | skipping: [testbed-node-0] 
2026-03-01 01:16:45.682771 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682774 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682782 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:16:45.682785 | orchestrator | 2026-03-01 01:16:45.682789 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-01 01:16:45.682796 | orchestrator | Sunday 01 March 2026 01:12:30 +0000 (0:00:01.090) 0:04:15.727 ********** 2026-03-01 01:16:45.682800 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-01 01:16:45.682804 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-01 01:16:45.682807 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-01 01:16:45.682820 | orchestrator | 2026-03-01 01:16:45.682825 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-01 01:16:45.682834 | orchestrator | Sunday 01 March 2026 01:12:31 +0000 (0:00:00.710) 0:04:16.437 ********** 2026-03-01 01:16:45.682838 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-01 01:16:45.682842 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-01 01:16:45.682846 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-01 01:16:45.682849 | orchestrator | 2026-03-01 01:16:45.682853 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-01 01:16:45.682857 | orchestrator | Sunday 01 March 2026 01:12:32 +0000 (0:00:01.318) 0:04:17.756 ********** 2026-03-01 01:16:45.682860 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-01 01:16:45.682867 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.682871 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-01 01:16:45.682874 | orchestrator | skipping: [testbed-node-4] 
2026-03-01 01:16:45.682878 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-01 01:16:45.682882 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.682885 | orchestrator | 2026-03-01 01:16:45.682889 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-01 01:16:45.682893 | orchestrator | Sunday 01 March 2026 01:12:33 +0000 (0:00:00.559) 0:04:18.315 ********** 2026-03-01 01:16:45.682897 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-01 01:16:45.682900 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-01 01:16:45.682907 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682911 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-01 01:16:45.682914 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-01 01:16:45.682918 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682922 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-01 01:16:45.682926 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-01 01:16:45.682929 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-01 01:16:45.682933 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-01 01:16:45.682937 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-01 01:16:45.682940 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682944 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-01 01:16:45.682957 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-01 01:16:45.682965 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-01 01:16:45.682969 | orchestrator | 2026-03-01 01:16:45.682973 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-01 01:16:45.682977 | orchestrator | Sunday 01 March 2026 01:12:34 +0000 (0:00:01.124) 0:04:19.440 ********** 2026-03-01 01:16:45.682980 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.682984 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.682988 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.682992 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.682995 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.682999 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.683003 | orchestrator | 2026-03-01 01:16:45.683007 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-01 01:16:45.683011 | orchestrator | Sunday 01 March 2026 01:12:35 +0000 (0:00:00.970) 0:04:20.410 ********** 2026-03-01 01:16:45.683014 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.683018 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.683034 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.683038 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.683042 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.683046 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.683050 | orchestrator | 2026-03-01 01:16:45.683053 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-01 01:16:45.683057 | orchestrator | Sunday 01 March 2026 01:12:36 +0000 (0:00:01.677) 0:04:22.088 ********** 2026-03-01 01:16:45.683061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683143 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683147 | orchestrator | 2026-03-01 01:16:45.683151 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-01 01:16:45.683155 | orchestrator | Sunday 01 March 2026 01:12:38 +0000 (0:00:02.080) 0:04:24.169 ********** 2026-03-01 01:16:45.683158 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:16:45.683163 | orchestrator | 2026-03-01 01:16:45.683167 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-01 01:16:45.683170 | orchestrator | Sunday 01 March 2026 01:12:40 +0000 (0:00:01.186) 0:04:25.356 ********** 2026-03-01 01:16:45.683174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683205 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.683252 | orchestrator | 2026-03-01 01:16:45.683256 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-01 01:16:45.683260 | orchestrator | Sunday 01 March 2026 01:12:43 +0000 (0:00:03.343) 0:04:28.699 ********** 2026-03-01 01:16:45.683266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683281 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.683285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683300 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.683306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683318 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.683322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683333 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.683339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683354 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.683358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683361 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.683365 | orchestrator | 2026-03-01 01:16:45.683369 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-01 01:16:45.683373 | orchestrator | Sunday 01 March 2026 01:12:44 +0000 (0:00:01.274) 0:04:29.973 ********** 2026-03-01 01:16:45.683377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683394 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.683399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683414 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.683418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.683422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.683428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683432 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.683438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683446 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.683450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683460 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.683464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.683658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.683666 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.683670 | orchestrator | 2026-03-01 01:16:45.683674 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-01 01:16:45.683677 | orchestrator | Sunday 01 March 2026 01:12:46 +0000 (0:00:01.818) 0:04:31.792 ********** 2026-03-01 01:16:45.683681 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.683685 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.683688 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.683692 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-01 01:16:45.683696 | orchestrator | 2026-03-01 01:16:45.683700 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-01 01:16:45.683706 | orchestrator | Sunday 01 March 2026 01:12:47 +0000 (0:00:00.897) 0:04:32.690 ********** 
2026-03-01 01:16:45.683710 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-01 01:16:45.683714 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-01 01:16:45.683717 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-01 01:16:45.683721 | orchestrator | 2026-03-01 01:16:45.683725 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-01 01:16:45.683729 | orchestrator | Sunday 01 March 2026 01:12:48 +0000 (0:00:00.883) 0:04:33.573 ********** 2026-03-01 01:16:45.683732 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-01 01:16:45.683736 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-01 01:16:45.683740 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-01 01:16:45.683743 | orchestrator | 2026-03-01 01:16:45.683747 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-01 01:16:45.683754 | orchestrator | Sunday 01 March 2026 01:12:49 +0000 (0:00:00.918) 0:04:34.492 ********** 2026-03-01 01:16:45.683758 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:16:45.683762 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:16:45.683766 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:16:45.683769 | orchestrator | 2026-03-01 01:16:45.683773 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-01 01:16:45.683777 | orchestrator | Sunday 01 March 2026 01:12:49 +0000 (0:00:00.493) 0:04:34.985 ********** 2026-03-01 01:16:45.683780 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:16:45.683784 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:16:45.683788 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:16:45.683791 | orchestrator | 2026-03-01 01:16:45.683795 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-01 01:16:45.683799 | orchestrator | Sunday 01 March 2026 01:12:50 +0000 (0:00:00.629) 0:04:35.615 
********** 2026-03-01 01:16:45.683802 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-01 01:16:45.683806 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-01 01:16:45.683810 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-01 01:16:45.683813 | orchestrator | 2026-03-01 01:16:45.683817 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-01 01:16:45.683821 | orchestrator | Sunday 01 March 2026 01:12:51 +0000 (0:00:01.028) 0:04:36.643 ********** 2026-03-01 01:16:45.683824 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-01 01:16:45.683828 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-01 01:16:45.683832 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-01 01:16:45.683836 | orchestrator | 2026-03-01 01:16:45.683839 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-01 01:16:45.683843 | orchestrator | Sunday 01 March 2026 01:12:52 +0000 (0:00:01.022) 0:04:37.665 ********** 2026-03-01 01:16:45.683847 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-01 01:16:45.683850 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-01 01:16:45.683854 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-01 01:16:45.683857 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-01 01:16:45.683861 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-01 01:16:45.683865 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-01 01:16:45.683868 | orchestrator | 2026-03-01 01:16:45.683872 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-01 01:16:45.683876 | orchestrator | Sunday 01 March 2026 01:12:55 +0000 (0:00:03.469) 0:04:41.134 ********** 2026-03-01 
01:16:45.683879 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.683883 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.683887 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.683890 | orchestrator | 2026-03-01 01:16:45.683894 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-01 01:16:45.683898 | orchestrator | Sunday 01 March 2026 01:12:56 +0000 (0:00:00.404) 0:04:41.538 ********** 2026-03-01 01:16:45.683901 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.683905 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.683909 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.683913 | orchestrator | 2026-03-01 01:16:45.683916 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-01 01:16:45.683920 | orchestrator | Sunday 01 March 2026 01:12:56 +0000 (0:00:00.304) 0:04:41.843 ********** 2026-03-01 01:16:45.683924 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.683927 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.683931 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.683935 | orchestrator | 2026-03-01 01:16:45.683938 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-01 01:16:45.683942 | orchestrator | Sunday 01 March 2026 01:12:57 +0000 (0:00:01.261) 0:04:43.105 ********** 2026-03-01 01:16:45.683952 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-01 01:16:45.683957 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-01 01:16:45.683961 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 
'enabled': True}) 2026-03-01 01:16:45.683964 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-01 01:16:45.683968 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-01 01:16:45.683972 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-01 01:16:45.683976 | orchestrator | 2026-03-01 01:16:45.683983 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-01 01:16:45.683987 | orchestrator | Sunday 01 March 2026 01:13:01 +0000 (0:00:03.387) 0:04:46.492 ********** 2026-03-01 01:16:45.683991 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:16:45.683995 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:16:45.683998 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:16:45.684002 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-01 01:16:45.684006 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-01 01:16:45.684009 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.684013 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.684017 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-01 01:16:45.684021 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.684075 | orchestrator | 2026-03-01 01:16:45.684079 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-01 01:16:45.684082 | orchestrator | Sunday 01 March 2026 01:13:04 +0000 (0:00:03.358) 0:04:49.851 ********** 2026-03-01 01:16:45.684086 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.684090 | orchestrator | skipping: [testbed-node-1] 2026-03-01 
01:16:45.684094 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684097 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-03-01 01:16:45.684101 | orchestrator |
2026-03-01 01:16:45.684105 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-03-01 01:16:45.684109 | orchestrator | Sunday 01 March 2026 01:13:06 +0000 (0:00:01.718) 0:04:51.569 **********
2026-03-01 01:16:45.684112 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-01 01:16:45.684116 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-01 01:16:45.684120 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-01 01:16:45.684124 | orchestrator |
2026-03-01 01:16:45.684128 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-03-01 01:16:45.684131 | orchestrator | Sunday 01 March 2026 01:13:07 +0000 (0:00:01.147) 0:04:52.716 **********
2026-03-01 01:16:45.684135 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684139 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684142 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684146 | orchestrator |
2026-03-01 01:16:45.684150 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-01 01:16:45.684154 | orchestrator | Sunday 01 March 2026 01:13:07 +0000 (0:00:00.308) 0:04:53.025 **********
2026-03-01 01:16:45.684157 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684161 | orchestrator |
2026-03-01 01:16:45.684165 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-01 01:16:45.684168 | orchestrator | Sunday 01 March 2026 01:13:07 +0000 (0:00:00.164) 0:04:53.189 **********
2026-03-01 01:16:45.684175 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684179 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684183 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684186 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684190 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684194 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684197 | orchestrator |
2026-03-01 01:16:45.684201 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-01 01:16:45.684205 | orchestrator | Sunday 01 March 2026 01:13:08 +0000 (0:00:00.556) 0:04:53.745 **********
2026-03-01 01:16:45.684209 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-01 01:16:45.684212 | orchestrator |
2026-03-01 01:16:45.684216 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-01 01:16:45.684220 | orchestrator | Sunday 01 March 2026 01:13:09 +0000 (0:00:00.928) 0:04:54.673 **********
2026-03-01 01:16:45.684224 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684227 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684231 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684235 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684238 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684242 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684246 | orchestrator |
2026-03-01 01:16:45.684250 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-01 01:16:45.684253 | orchestrator | Sunday 01 March 2026 01:13:10 +0000 (0:00:00.616) 0:04:55.290 **********
2026-03-01 01:16:45.684262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684361 | orchestrator |
2026-03-01 01:16:45.684369 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-01 01:16:45.684373 | orchestrator | Sunday 01 March 2026 01:13:14 +0000 (0:00:04.220) 0:04:59.511 **********
2026-03-01 01:16:45.684377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.684409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.684414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-01 01:16:45.684448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.684462 | orchestrator |
2026-03-01 01:16:45.684466 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-01 01:16:45.684470 | orchestrator | Sunday 01 March 2026 01:13:20 +0000 (0:00:06.005) 0:05:05.516 **********
2026-03-01 01:16:45.684475 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684479 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684483 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684488 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684494 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684498 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684503 | orchestrator |
2026-03-01 01:16:45.684507 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-01 01:16:45.684511 | orchestrator | Sunday 01 March 2026 01:13:22 +0000 (0:00:01.910) 0:05:07.426 **********
2026-03-01 01:16:45.684516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684520 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684524 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684529 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684533 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684541 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684546 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684550 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-01 01:16:45.684555 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684559 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684563 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684568 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684572 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684577 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684581 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-01 01:16:45.684585 | orchestrator |
2026-03-01 01:16:45.684590 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-01 01:16:45.684594 | orchestrator | Sunday 01 March 2026 01:13:25 +0000 (0:00:03.270) 0:05:10.696 **********
2026-03-01 01:16:45.684598 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684603 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684607 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684611 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684616 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684620 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684624 | orchestrator |
2026-03-01 01:16:45.684629 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-01 01:16:45.684633 | orchestrator | Sunday 01 March 2026 01:13:26 +0000 (0:00:00.578) 0:05:11.275 **********
2026-03-01 01:16:45.684638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684642 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684647 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684651 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684655 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684660 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-01 01:16:45.684664 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684668 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684673 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684678 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684685 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684691 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684698 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684705 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684711 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684722 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684729 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684734 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684739 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684749 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684755 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-01 01:16:45.684761 | orchestrator |
2026-03-01 01:16:45.684767 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-01 01:16:45.684772 | orchestrator | Sunday 01 March 2026 01:13:30 +0000 (0:00:04.932) 0:05:16.208 **********
2026-03-01 01:16:45.684778 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684783 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684789 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684796 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684807 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684820 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684825 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684831 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-01 01:16:45.684836 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684843 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684849 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684856 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684862 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684868 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684874 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684880 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684887 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684893 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684900 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684905 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-01 01:16:45.684912 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684918 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684924 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-01 01:16:45.684930 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684936 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684947 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-01 01:16:45.684954 | orchestrator |
2026-03-01 01:16:45.684961 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-01 01:16:45.684966 | orchestrator | Sunday 01 March 2026 01:13:37 +0000 (0:00:06.933) 0:05:23.142 **********
2026-03-01 01:16:45.684970 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.684973 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.684977 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.684981 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.684985 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.684988 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.684992 | orchestrator |
2026-03-01 01:16:45.684996 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-01 01:16:45.684999 | orchestrator | Sunday 01 March 2026 01:13:38 +0000 (0:00:00.798) 0:05:23.940 **********
2026-03-01 01:16:45.685003 | orchestrator | skipping: [testbed-node-3]
2026-03-01 01:16:45.685007 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.685010 | orchestrator | skipping: [testbed-node-5]
2026-03-01 01:16:45.685014 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.685018 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.685032 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.685039 | orchestrator |
2026-03-01 01:16:45.685045 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-01 01:16:45.685049 | orchestrator | Sunday 01 March 2026 01:13:39 +0000 (0:00:00.585) 0:05:24.526 **********
2026-03-01 01:16:45.685052 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:16:45.685056 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:16:45.685060 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:16:45.685063 | orchestrator | changed: [testbed-node-3]
2026-03-01 01:16:45.685067 | orchestrator | changed: [testbed-node-4]
2026-03-01 01:16:45.685070 | orchestrator | changed: [testbed-node-5]
2026-03-01 01:16:45.685074 | orchestrator |
2026-03-01 01:16:45.685078 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-01 01:16:45.685082 | orchestrator | Sunday 01 March 2026 01:13:41 +0000 (0:00:02.152) 0:05:26.678 **********
2026-03-01 01:16:45.685093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.685097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-01 01:16:45.685102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-01 01:16:45.685109 | orchestrator | skipping: [testbed-node-4]
2026-03-01 01:16:45.685113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-01 01:16:45.685117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.685124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.685128 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-01 01:16:45.685138 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-01 01:16:45.685144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.685148 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.685156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.685160 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.685173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.685177 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-01 01:16:45.685187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-01 01:16:45.685191 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685195 | orchestrator | 2026-03-01 01:16:45.685198 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-01 01:16:45.685202 | orchestrator | Sunday 01 March 2026 01:13:42 +0000 (0:00:01.513) 0:05:28.192 ********** 2026-03-01 01:16:45.685206 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-01 01:16:45.685210 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685213 | orchestrator | skipping: [testbed-node-3] 2026-03-01 
01:16:45.685217 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-01 01:16:45.685221 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685225 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.685228 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-01 01:16:45.685232 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685236 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685242 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-01 01:16:45.685248 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685258 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685265 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-01 01:16:45.685272 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685278 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685284 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-01 01:16:45.685290 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-01 01:16:45.685297 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685303 | orchestrator | 2026-03-01 01:16:45.685309 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-01 01:16:45.685315 | orchestrator | Sunday 01 March 2026 01:13:43 +0000 (0:00:00.859) 0:05:29.052 ********** 2026-03-01 01:16:45.685326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-01 01:16:45.685419 | orchestrator | 2026-03-01 01:16:45.685423 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-01 01:16:45.685427 | orchestrator | Sunday 01 March 2026 01:13:46 +0000 (0:00:02.830) 0:05:31.882 ********** 2026-03-01 01:16:45.685431 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685435 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.685439 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685443 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685446 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685450 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685454 | orchestrator | 2026-03-01 01:16:45.685458 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685461 | orchestrator | Sunday 01 March 2026 01:13:47 +0000 (0:00:00.756) 0:05:32.638 ********** 2026-03-01 01:16:45.685465 | orchestrator | 2026-03-01 01:16:45.685469 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685472 | orchestrator | Sunday 01 March 2026 01:13:47 +0000 (0:00:00.150) 0:05:32.789 ********** 2026-03-01 01:16:45.685476 | orchestrator | 2026-03-01 01:16:45.685480 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685484 | orchestrator | Sunday 01 March 2026 01:13:47 +0000 (0:00:00.126) 0:05:32.915 ********** 2026-03-01 01:16:45.685487 | orchestrator | 2026-03-01 01:16:45.685491 | orchestrator 
| TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685495 | orchestrator | Sunday 01 March 2026 01:13:47 +0000 (0:00:00.130) 0:05:33.045 ********** 2026-03-01 01:16:45.685499 | orchestrator | 2026-03-01 01:16:45.685503 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685506 | orchestrator | Sunday 01 March 2026 01:13:48 +0000 (0:00:00.310) 0:05:33.355 ********** 2026-03-01 01:16:45.685510 | orchestrator | 2026-03-01 01:16:45.685514 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-01 01:16:45.685518 | orchestrator | Sunday 01 March 2026 01:13:48 +0000 (0:00:00.126) 0:05:33.482 ********** 2026-03-01 01:16:45.685522 | orchestrator | 2026-03-01 01:16:45.685525 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-01 01:16:45.685529 | orchestrator | Sunday 01 March 2026 01:13:48 +0000 (0:00:00.131) 0:05:33.613 ********** 2026-03-01 01:16:45.685533 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.685537 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.685540 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.685548 | orchestrator | 2026-03-01 01:16:45.685552 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-01 01:16:45.685555 | orchestrator | Sunday 01 March 2026 01:13:59 +0000 (0:00:11.305) 0:05:44.918 ********** 2026-03-01 01:16:45.685559 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.685563 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.685567 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.685571 | orchestrator | 2026-03-01 01:16:45.685575 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-01 01:16:45.685579 | orchestrator | Sunday 01 March 2026 
01:14:18 +0000 (0:00:18.326) 0:06:03.245 ********** 2026-03-01 01:16:45.685583 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.685586 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.685590 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.685594 | orchestrator | 2026-03-01 01:16:45.685597 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-01 01:16:45.685601 | orchestrator | Sunday 01 March 2026 01:14:37 +0000 (0:00:19.480) 0:06:22.725 ********** 2026-03-01 01:16:45.685605 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.685609 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.685612 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.685616 | orchestrator | 2026-03-01 01:16:45.685620 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-01 01:16:45.685624 | orchestrator | Sunday 01 March 2026 01:15:07 +0000 (0:00:29.651) 0:06:52.376 ********** 2026-03-01 01:16:45.685630 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.685634 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.685637 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.685641 | orchestrator | 2026-03-01 01:16:45.685645 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-01 01:16:45.685649 | orchestrator | Sunday 01 March 2026 01:15:07 +0000 (0:00:00.714) 0:06:53.091 ********** 2026-03-01 01:16:45.685653 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.685657 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.685661 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.685664 | orchestrator | 2026-03-01 01:16:45.685668 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-01 01:16:45.685672 | orchestrator | Sunday 01 March 2026 01:15:08 +0000 
(0:00:00.703) 0:06:53.795 ********** 2026-03-01 01:16:45.685676 | orchestrator | changed: [testbed-node-3] 2026-03-01 01:16:45.685680 | orchestrator | changed: [testbed-node-4] 2026-03-01 01:16:45.685683 | orchestrator | changed: [testbed-node-5] 2026-03-01 01:16:45.685687 | orchestrator | 2026-03-01 01:16:45.685691 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-01 01:16:45.685699 | orchestrator | Sunday 01 March 2026 01:15:32 +0000 (0:00:23.816) 0:07:17.611 ********** 2026-03-01 01:16:45.685703 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685707 | orchestrator | 2026-03-01 01:16:45.685711 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-01 01:16:45.685715 | orchestrator | Sunday 01 March 2026 01:15:32 +0000 (0:00:00.129) 0:07:17.740 ********** 2026-03-01 01:16:45.685718 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685722 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685726 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685730 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685734 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685738 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-01 01:16:45.685742 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:16:45.685745 | orchestrator | 2026-03-01 01:16:45.685749 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-01 01:16:45.685753 | orchestrator | Sunday 01 March 2026 01:15:53 +0000 (0:00:21.331) 0:07:39.072 ********** 2026-03-01 01:16:45.685760 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685764 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685768 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685771 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685775 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685779 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.685783 | orchestrator | 2026-03-01 01:16:45.685787 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-01 01:16:45.685791 | orchestrator | Sunday 01 March 2026 01:16:02 +0000 (0:00:08.663) 0:07:47.735 ********** 2026-03-01 01:16:45.685795 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.685800 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.685807 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.685813 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.685818 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.685824 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-01 01:16:45.685830 | orchestrator | 2026-03-01 01:16:45.685836 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-01 01:16:45.685844 | orchestrator | Sunday 01 March 2026 01:16:05 +0000 (0:00:03.462) 0:07:51.197 ********** 2026-03-01 01:16:45.685850 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:16:45.685857 | 
orchestrator | 2026-03-01 01:16:45.685864 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-01 01:16:45.685870 | orchestrator | Sunday 01 March 2026 01:16:19 +0000 (0:00:13.822) 0:08:05.020 ********** 2026-03-01 01:16:45.685877 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:16:45.685884 | orchestrator | 2026-03-01 01:16:45.685890 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-01 01:16:45.685895 | orchestrator | Sunday 01 March 2026 01:16:21 +0000 (0:00:01.448) 0:08:06.468 ********** 2026-03-01 01:16:45.685899 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.685903 | orchestrator | 2026-03-01 01:16:45.685906 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-01 01:16:45.685910 | orchestrator | Sunday 01 March 2026 01:16:22 +0000 (0:00:01.491) 0:08:07.960 ********** 2026-03-01 01:16:45.685914 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-01 01:16:45.685917 | orchestrator | 2026-03-01 01:16:45.685921 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-01 01:16:45.685925 | orchestrator | Sunday 01 March 2026 01:16:35 +0000 (0:00:12.747) 0:08:20.707 ********** 2026-03-01 01:16:45.685929 | orchestrator | ok: [testbed-node-3] 2026-03-01 01:16:45.685933 | orchestrator | ok: [testbed-node-4] 2026-03-01 01:16:45.685937 | orchestrator | ok: [testbed-node-5] 2026-03-01 01:16:45.685940 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:16:45.685944 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:16:45.685948 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:16:45.685951 | orchestrator | 2026-03-01 01:16:45.685955 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-01 01:16:45.685959 | orchestrator | 2026-03-01 
01:16:45.685962 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-01 01:16:45.685966 | orchestrator | Sunday 01 March 2026 01:16:37 +0000 (0:00:01.907) 0:08:22.614 ********** 2026-03-01 01:16:45.685970 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:16:45.685974 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:16:45.685978 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:16:45.685981 | orchestrator | 2026-03-01 01:16:45.685985 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-01 01:16:45.685989 | orchestrator | 2026-03-01 01:16:45.685992 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-01 01:16:45.685996 | orchestrator | Sunday 01 March 2026 01:16:38 +0000 (0:00:01.110) 0:08:23.724 ********** 2026-03-01 01:16:45.686076 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.686088 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.686092 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.686096 | orchestrator | 2026-03-01 01:16:45.686099 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-01 01:16:45.686103 | orchestrator | 2026-03-01 01:16:45.686107 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-01 01:16:45.686111 | orchestrator | Sunday 01 March 2026 01:16:39 +0000 (0:00:00.533) 0:08:24.258 ********** 2026-03-01 01:16:45.686114 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-01 01:16:45.686118 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-01 01:16:45.686122 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686125 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-01 01:16:45.686129 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-01 01:16:45.686133 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686139 | orchestrator | skipping: [testbed-node-3] 2026-03-01 01:16:45.686143 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-01 01:16:45.686147 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-01 01:16:45.686150 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686154 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-01 01:16:45.686158 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-01 01:16:45.686162 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686165 | orchestrator | skipping: [testbed-node-4] 2026-03-01 01:16:45.686169 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-01 01:16:45.686173 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-01 01:16:45.686176 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686180 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-01 01:16:45.686184 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-01 01:16:45.686188 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686192 | orchestrator | skipping: [testbed-node-5] 2026-03-01 01:16:45.686195 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-01 01:16:45.686199 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-01 01:16:45.686203 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686206 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-01 01:16:45.686210 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-01 01:16:45.686214 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686217 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-01 01:16:45.686221 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-01 01:16:45.686225 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686228 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-01 01:16:45.686232 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-01 01:16:45.686236 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686240 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.686243 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.686247 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-01 01:16:45.686251 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-01 01:16:45.686255 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-01 01:16:45.686258 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-01 01:16:45.686265 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-01 01:16:45.686269 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-01 01:16:45.686273 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.686276 | orchestrator | 2026-03-01 01:16:45.686280 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-01 01:16:45.686284 | orchestrator | 2026-03-01 01:16:45.686288 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-01 01:16:45.686292 | orchestrator | Sunday 01 March 2026 01:16:40 +0000 (0:00:01.559) 
0:08:25.817 ********** 2026-03-01 01:16:45.686296 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-01 01:16:45.686300 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-01 01:16:45.686303 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.686307 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-01 01:16:45.686311 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-01 01:16:45.686315 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.686319 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-01 01:16:45.686322 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-01 01:16:45.686326 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:16:45.686330 | orchestrator | 2026-03-01 01:16:45.686334 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-01 01:16:45.686338 | orchestrator | 2026-03-01 01:16:45.686341 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-01 01:16:45.686347 | orchestrator | Sunday 01 March 2026 01:16:41 +0000 (0:00:00.744) 0:08:26.562 ********** 2026-03-01 01:16:45.686353 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.686361 | orchestrator | 2026-03-01 01:16:45.686370 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-01 01:16:45.686376 | orchestrator | 2026-03-01 01:16:45.686386 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-01 01:16:45.686393 | orchestrator | Sunday 01 March 2026 01:16:42 +0000 (0:00:00.669) 0:08:27.231 ********** 2026-03-01 01:16:45.686399 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:16:45.686404 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:16:45.686410 | orchestrator | skipping: [testbed-node-2] 
2026-03-01 01:16:45.686415 | orchestrator | 2026-03-01 01:16:45.686421 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:16:45.686426 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-01 01:16:45.686432 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-01 01:16:45.686441 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-01 01:16:45.686448 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-01 01:16:45.686454 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-01 01:16:45.686460 | orchestrator | testbed-node-4 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-01 01:16:45.686466 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-01 01:16:45.686472 | orchestrator | 2026-03-01 01:16:45.686478 | orchestrator | 2026-03-01 01:16:45.686485 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:16:45.686495 | orchestrator | Sunday 01 March 2026 01:16:42 +0000 (0:00:00.593) 0:08:27.825 ********** 2026-03-01 01:16:45.686500 | orchestrator | =============================================================================== 2026-03-01 01:16:45.686503 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.83s 2026-03-01 01:16:45.686509 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.65s 2026-03-01 01:16:45.686516 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.82s 2026-03-01 01:16:45.686521 | orchestrator | nova : Restart 
nova-scheduler container -------------------------------- 23.15s 2026-03-01 01:16:45.686526 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.52s 2026-03-01 01:16:45.686533 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.33s 2026-03-01 01:16:45.686540 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.48s 2026-03-01 01:16:45.686546 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.50s 2026-03-01 01:16:45.686553 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.33s 2026-03-01 01:16:45.686561 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.64s 2026-03-01 01:16:45.686570 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.14s 2026-03-01 01:16:45.686576 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.02s 2026-03-01 01:16:45.686582 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.82s 2026-03-01 01:16:45.686588 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.97s 2026-03-01 01:16:45.686593 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.75s 2026-03-01 01:16:45.686599 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.31s 2026-03-01 01:16:45.686605 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.66s 2026-03-01 01:16:45.686611 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.93s 2026-03-01 01:16:45.686618 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.74s 2026-03-01 01:16:45.686625 | orchestrator | service-ks-register : nova | 
Creating endpoints ------------------------- 6.55s 2026-03-01 01:16:45.686631 | orchestrator | 2026-03-01 01:16:45 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:16:48.715911 | orchestrator | 2026-03-01 01:16:48 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:16:48.716018 | orchestrator | 2026-03-01 01:16:48 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:16:51.763307 | orchestrator | 2026-03-01 01:16:51 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:16:51.763376 | orchestrator | 2026-03-01 01:16:51 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:16:54.806889 | orchestrator | 2026-03-01 01:16:54 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:16:54.806961 | orchestrator | 2026-03-01 01:16:54 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:16:57.845949 | orchestrator | 2026-03-01 01:16:57 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:16:57.846004 | orchestrator | 2026-03-01 01:16:57 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:00.892074 | orchestrator | 2026-03-01 01:17:00 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:00.892161 | orchestrator | 2026-03-01 01:17:00 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:03.937970 | orchestrator | 2026-03-01 01:17:03 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:03.938101 | orchestrator | 2026-03-01 01:17:03 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:06.987460 | orchestrator | 2026-03-01 01:17:06 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:06.987562 | orchestrator | 2026-03-01 01:17:06 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:10.051974 | orchestrator | 2026-03-01 01:17:10 | INFO  | Task 
f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:10.052096 | orchestrator | 2026-03-01 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:13.091378 | orchestrator | 2026-03-01 01:17:13 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:13.091449 | orchestrator | 2026-03-01 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:16.139852 | orchestrator | 2026-03-01 01:17:16 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state STARTED 2026-03-01 01:17:16.139960 | orchestrator | 2026-03-01 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-03-01 01:17:19.192215 | orchestrator | 2026-03-01 01:17:19 | INFO  | Task f29420d2-649f-45f3-b015-95453c3b2780 is in state SUCCESS 2026-03-01 01:17:19.192302 | orchestrator | 2026-03-01 01:17:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-01 01:17:19.194874 | orchestrator | 2026-03-01 01:17:19.194987 | orchestrator | 2026-03-01 01:17:19.195023 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-01 01:17:19.195029 | orchestrator | 2026-03-01 01:17:19.195033 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-01 01:17:19.195038 | orchestrator | Sunday 01 March 2026 01:12:34 +0000 (0:00:00.259) 0:00:00.259 ********** 2026-03-01 01:17:19.195042 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:17:19.195047 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:17:19.195051 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:17:19.195055 | orchestrator | 2026-03-01 01:17:19.195105 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-01 01:17:19.195110 | orchestrator | Sunday 01 March 2026 01:12:34 +0000 (0:00:00.298) 0:00:00.558 ********** 2026-03-01 01:17:19.195115 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 
2026-03-01 01:17:19.195120 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-01 01:17:19.195126 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-01 01:17:19.195161 | orchestrator | 2026-03-01 01:17:19.195169 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-01 01:17:19.195175 | orchestrator | 2026-03-01 01:17:19.195181 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-01 01:17:19.195188 | orchestrator | Sunday 01 March 2026 01:12:35 +0000 (0:00:00.469) 0:00:01.027 ********** 2026-03-01 01:17:19.195195 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:17:19.195203 | orchestrator | 2026-03-01 01:17:19.195210 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-01 01:17:19.195216 | orchestrator | Sunday 01 March 2026 01:12:35 +0000 (0:00:00.571) 0:00:01.598 ********** 2026-03-01 01:17:19.195223 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-01 01:17:19.195230 | orchestrator | 2026-03-01 01:17:19.195235 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-01 01:17:19.195239 | orchestrator | Sunday 01 March 2026 01:12:38 +0000 (0:00:03.145) 0:00:04.744 ********** 2026-03-01 01:17:19.195246 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-01 01:17:19.195252 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-01 01:17:19.195258 | orchestrator | 2026-03-01 01:17:19.195268 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-01 01:17:19.195299 | orchestrator | Sunday 01 March 2026 01:12:45 +0000 
(0:00:06.630) 0:00:11.375 ********** 2026-03-01 01:17:19.195305 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-01 01:17:19.195311 | orchestrator | 2026-03-01 01:17:19.195317 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-01 01:17:19.195323 | orchestrator | Sunday 01 March 2026 01:12:48 +0000 (0:00:02.781) 0:00:14.157 ********** 2026-03-01 01:17:19.195329 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-01 01:17:19.195682 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-01 01:17:19.195696 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-01 01:17:19.195700 | orchestrator | 2026-03-01 01:17:19.195705 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-01 01:17:19.195710 | orchestrator | Sunday 01 March 2026 01:12:55 +0000 (0:00:07.358) 0:00:21.515 ********** 2026-03-01 01:17:19.195715 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-01 01:17:19.195720 | orchestrator | 2026-03-01 01:17:19.195725 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-01 01:17:19.195730 | orchestrator | Sunday 01 March 2026 01:12:59 +0000 (0:00:04.101) 0:00:25.616 ********** 2026-03-01 01:17:19.195734 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-01 01:17:19.195739 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-01 01:17:19.195743 | orchestrator | 2026-03-01 01:17:19.195748 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-01 01:17:19.195753 | orchestrator | Sunday 01 March 2026 01:13:06 +0000 (0:00:07.082) 0:00:32.698 ********** 2026-03-01 01:17:19.195758 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-01 01:17:19.195762 | orchestrator 
| changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-01 01:17:19.195766 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-01 01:17:19.195770 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-01 01:17:19.195786 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-01 01:17:19.195791 | orchestrator | 2026-03-01 01:17:19.195796 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-01 01:17:19.195800 | orchestrator | Sunday 01 March 2026 01:13:21 +0000 (0:00:15.120) 0:00:47.819 ********** 2026-03-01 01:17:19.195805 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:17:19.195810 | orchestrator | 2026-03-01 01:17:19.195814 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-01 01:17:19.195818 | orchestrator | Sunday 01 March 2026 01:13:22 +0000 (0:00:00.553) 0:00:48.373 ********** 2026-03-01 01:17:19.195822 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.195826 | orchestrator | 2026-03-01 01:17:19.195830 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-01 01:17:19.195834 | orchestrator | Sunday 01 March 2026 01:13:27 +0000 (0:00:04.790) 0:00:53.163 ********** 2026-03-01 01:17:19.195837 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.195841 | orchestrator | 2026-03-01 01:17:19.195845 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-01 01:17:19.195859 | orchestrator | Sunday 01 March 2026 01:13:31 +0000 (0:00:04.388) 0:00:57.552 ********** 2026-03-01 01:17:19.195863 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:17:19.195867 | orchestrator | 2026-03-01 01:17:19.195871 | orchestrator | TASK [octavia : Create security groups 
for octavia] **************************** 2026-03-01 01:17:19.195875 | orchestrator | Sunday 01 March 2026 01:13:34 +0000 (0:00:03.181) 0:01:00.733 ********** 2026-03-01 01:17:19.195879 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-01 01:17:19.195883 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-01 01:17:19.195886 | orchestrator | 2026-03-01 01:17:19.195901 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-01 01:17:19.195904 | orchestrator | Sunday 01 March 2026 01:13:45 +0000 (0:00:10.226) 0:01:10.960 ********** 2026-03-01 01:17:19.195908 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-01 01:17:19.195912 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-01 01:17:19.195919 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-01 01:17:19.195924 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-01 01:17:19.195928 | orchestrator | 2026-03-01 01:17:19.195931 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-01 01:17:19.195935 | orchestrator | Sunday 01 March 2026 01:13:59 +0000 (0:00:14.830) 0:01:25.790 ********** 2026-03-01 01:17:19.195939 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.195943 | orchestrator | 2026-03-01 01:17:19.195946 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-01 01:17:19.195950 | orchestrator | Sunday 01 March 2026 01:14:04 +0000 (0:00:04.451) 0:01:30.242 ********** 
2026-03-01 01:17:19.195954 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.195958 | orchestrator | 2026-03-01 01:17:19.195962 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-01 01:17:19.195965 | orchestrator | Sunday 01 March 2026 01:14:09 +0000 (0:00:05.042) 0:01:35.284 ********** 2026-03-01 01:17:19.195969 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:17:19.195973 | orchestrator | 2026-03-01 01:17:19.195977 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-01 01:17:19.195981 | orchestrator | Sunday 01 March 2026 01:14:09 +0000 (0:00:00.192) 0:01:35.477 ********** 2026-03-01 01:17:19.195985 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:17:19.195988 | orchestrator | 2026-03-01 01:17:19.195992 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-01 01:17:19.195996 | orchestrator | Sunday 01 March 2026 01:14:14 +0000 (0:00:04.484) 0:01:39.962 ********** 2026-03-01 01:17:19.196000 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-01 01:17:19.196345 | orchestrator | 2026-03-01 01:17:19.196355 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-01 01:17:19.196362 | orchestrator | Sunday 01 March 2026 01:14:15 +0000 (0:00:00.913) 0:01:40.875 ********** 2026-03-01 01:17:19.196368 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196377 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196384 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196389 | orchestrator | 2026-03-01 01:17:19.196396 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-01 01:17:19.196402 | orchestrator | Sunday 01 March 2026 01:14:19 +0000 (0:00:04.474) 0:01:45.350 ********** 
2026-03-01 01:17:19.196408 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196415 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196419 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196425 | orchestrator | 2026-03-01 01:17:19.196431 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-01 01:17:19.196437 | orchestrator | Sunday 01 March 2026 01:14:23 +0000 (0:00:04.350) 0:01:49.700 ********** 2026-03-01 01:17:19.196444 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196449 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196458 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196465 | orchestrator | 2026-03-01 01:17:19.196473 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-01 01:17:19.196480 | orchestrator | Sunday 01 March 2026 01:14:24 +0000 (0:00:00.754) 0:01:50.455 ********** 2026-03-01 01:17:19.196496 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:17:19.196510 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:17:19.196517 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:17:19.196523 | orchestrator | 2026-03-01 01:17:19.196529 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-01 01:17:19.196535 | orchestrator | Sunday 01 March 2026 01:14:26 +0000 (0:00:01.750) 0:01:52.205 ********** 2026-03-01 01:17:19.196540 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196546 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196552 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196557 | orchestrator | 2026-03-01 01:17:19.196564 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-01 01:17:19.196570 | orchestrator | Sunday 01 March 2026 01:14:27 +0000 (0:00:01.202) 0:01:53.408 ********** 2026-03-01 01:17:19.196576 | 
orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196582 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196588 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196594 | orchestrator | 2026-03-01 01:17:19.196601 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-01 01:17:19.196645 | orchestrator | Sunday 01 March 2026 01:14:28 +0000 (0:00:01.064) 0:01:54.472 ********** 2026-03-01 01:17:19.196658 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196664 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196670 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196676 | orchestrator | 2026-03-01 01:17:19.196718 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-01 01:17:19.196724 | orchestrator | Sunday 01 March 2026 01:14:30 +0000 (0:00:01.873) 0:01:56.345 ********** 2026-03-01 01:17:19.196728 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.196732 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.196735 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.196739 | orchestrator | 2026-03-01 01:17:19.196743 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-01 01:17:19.196747 | orchestrator | Sunday 01 March 2026 01:14:32 +0000 (0:00:01.625) 0:01:57.970 ********** 2026-03-01 01:17:19.196751 | orchestrator | ok: [testbed-node-0] 2026-03-01 01:17:19.196755 | orchestrator | ok: [testbed-node-1] 2026-03-01 01:17:19.196758 | orchestrator | ok: [testbed-node-2] 2026-03-01 01:17:19.196762 | orchestrator | 2026-03-01 01:17:19.196766 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-01 01:17:19.196770 | orchestrator | Sunday 01 March 2026 01:14:32 +0000 (0:00:00.605) 0:01:58.576 ********** 2026-03-01 01:17:19.196774 | orchestrator | ok: 
[testbed-node-0]
2026-03-01 01:17:19.196778 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:17:19.196782 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:17:19.196785 | orchestrator |
2026-03-01 01:17:19.196789 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-01 01:17:19.196793 | orchestrator | Sunday 01 March 2026  01:14:35 +0000 (0:00:02.625) 0:02:01.201 **********
2026-03-01 01:17:19.196797 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:17:19.196801 | orchestrator |
2026-03-01 01:17:19.196805 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-01 01:17:19.196809 | orchestrator | Sunday 01 March 2026  01:14:36 +0000 (0:00:00.682) 0:02:01.884 **********
2026-03-01 01:17:19.196813 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:17:19.196816 | orchestrator |
2026-03-01 01:17:19.196820 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-01 01:17:19.196824 | orchestrator | Sunday 01 March 2026  01:14:40 +0000 (0:00:04.036) 0:02:05.920 **********
2026-03-01 01:17:19.196828 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:17:19.196831 | orchestrator |
2026-03-01 01:17:19.196835 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-01 01:17:19.196839 | orchestrator | Sunday 01 March 2026  01:14:43 +0000 (0:00:03.489) 0:02:09.410 **********
2026-03-01 01:17:19.196850 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-01 01:17:19.196854 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-01 01:17:19.196858 | orchestrator |
2026-03-01 01:17:19.196862 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-01 01:17:19.196866 | orchestrator | Sunday 01 March 2026  01:14:50 +0000 (0:00:07.254) 0:02:16.665 **********
2026-03-01 01:17:19.196870 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:17:19.196873 | orchestrator |
2026-03-01 01:17:19.196877 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-01 01:17:19.196881 | orchestrator | Sunday 01 March 2026  01:14:54 +0000 (0:00:03.238) 0:02:19.904 **********
2026-03-01 01:17:19.196885 | orchestrator | ok: [testbed-node-0]
2026-03-01 01:17:19.196889 | orchestrator | ok: [testbed-node-1]
2026-03-01 01:17:19.196892 | orchestrator | ok: [testbed-node-2]
2026-03-01 01:17:19.196896 | orchestrator |
2026-03-01 01:17:19.196900 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-01 01:17:19.196904 | orchestrator | Sunday 01 March 2026  01:14:54 +0000 (0:00:00.307) 0:02:20.211 **********
2026-03-01 01:17:19.196917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.196940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.196948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.196960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.196968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.196974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.196982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.196994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197084 | orchestrator |
2026-03-01 01:17:19.197088 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-01 01:17:19.197093 | orchestrator | Sunday 01 March 2026  01:14:56 +0000 (0:00:02.428) 0:02:22.639 **********
2026-03-01 01:17:19.197097 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:17:19.197102 | orchestrator |
2026-03-01 01:17:19.197117 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-01 01:17:19.197123 | orchestrator | Sunday 01 March 2026  01:14:56 +0000 (0:00:00.137) 0:02:22.777 **********
2026-03-01 01:17:19.197128 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:17:19.197132 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:17:19.197136 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:17:19.197140 | orchestrator |
2026-03-01 01:17:19.197145 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-01 01:17:19.197153 | orchestrator | Sunday 01 March 2026  01:14:57 +0000 (0:00:00.461) 0:02:23.239 **********
2026-03-01 01:17:19.197158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197184 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:17:19.197214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197242 | orchestrator | skipping: [testbed-node-1]
2026-03-01 01:17:19.197253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197294 | orchestrator | skipping: [testbed-node-2]
2026-03-01 01:17:19.197299 | orchestrator |
2026-03-01 01:17:19.197303 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-01 01:17:19.197308 | orchestrator | Sunday 01 March 2026  01:14:58 +0000 (0:00:00.679) 0:02:23.918 **********
2026-03-01 01:17:19.197312 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-01 01:17:19.197316 | orchestrator |
2026-03-01 01:17:19.197320 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-01 01:17:19.197324 | orchestrator | Sunday 01 March 2026  01:14:58 +0000 (0:00:00.569) 0:02:24.487 **********
2026-03-01 01:17:19.197331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197453 | orchestrator |
2026-03-01 01:17:19.197457 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-01 01:17:19.197461 | orchestrator | Sunday 01 March 2026  01:15:03 +0000 (0:00:05.117) 0:02:29.605 **********
2026-03-01 01:17:19.197465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-01 01:17:19.197468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-01 01:17:19.197472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-01 01:17:19.197483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-01 01:17:19.197490 | orchestrator | skipping: [testbed-node-0]
2026-03-01 01:17:19.197498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 01:17:19.197502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 01:17:19.197506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:17:19.197524 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:17:19.197534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 01:17:19.197547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 01:17:19.197557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:17:19.197569 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:17:19.197573 | orchestrator | 2026-03-01 01:17:19.197577 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-01 01:17:19.197581 | orchestrator | Sunday 01 March 2026 01:15:04 +0000 (0:00:00.755) 0:02:30.360 ********** 2026-03-01 01:17:19.197585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-03-01 01:17:19.197595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 01:17:19.197599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:17:19.197616 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:17:19.197623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 01:17:19.197629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 01:17:19.197645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 
01:17:19.197662 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:17:19.197666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-01 01:17:19.197670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-01 01:17:19.197674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-01 01:17:19.197688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-01 01:17:19.197692 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:17:19.197696 | orchestrator | 2026-03-01 01:17:19.197699 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-01 01:17:19.197703 | orchestrator | Sunday 01 March 2026 01:15:05 +0000 (0:00:00.891) 0:02:31.252 ********** 2026-03-01 01:17:19.197712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197803 | orchestrator | 2026-03-01 01:17:19.197807 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-01 01:17:19.197814 | orchestrator | Sunday 01 March 2026 01:15:10 +0000 (0:00:04.752) 0:02:36.004 ********** 2026-03-01 01:17:19.197818 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-01 01:17:19.197823 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-01 01:17:19.197827 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-01 01:17:19.197831 | orchestrator | 2026-03-01 01:17:19.197835 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-01 01:17:19.197838 | orchestrator | Sunday 01 March 2026 01:15:13 +0000 (0:00:02.868) 0:02:38.873 ********** 2026-03-01 01:17:19.197842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.197861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.197875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.197922 | orchestrator | 2026-03-01 01:17:19.197926 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-01 01:17:19.197930 | orchestrator | Sunday 01 March 2026 01:15:29 +0000 (0:00:16.362) 0:02:55.235 ********** 2026-03-01 01:17:19.197934 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.197938 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.197941 | orchestrator | changed: [testbed-node-2] 
2026-03-01 01:17:19.197945 | orchestrator | 2026-03-01 01:17:19.197949 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-01 01:17:19.197952 | orchestrator | Sunday 01 March 2026 01:15:30 +0000 (0:00:01.444) 0:02:56.680 ********** 2026-03-01 01:17:19.197956 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.197960 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.197966 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.197969 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.197973 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.197977 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.197981 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.197988 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.197991 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.197995 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-01 01:17:19.197999 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198064 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198069 | orchestrator | 2026-03-01 01:17:19.198072 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-01 01:17:19.198076 | orchestrator | Sunday 01 March 2026 01:15:37 +0000 (0:00:06.205) 0:03:02.885 ********** 2026-03-01 01:17:19.198080 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.198083 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-01 
01:17:19.198087 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.198091 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198095 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198098 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198102 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.198106 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.198110 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.198113 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198117 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198121 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198125 | orchestrator | 2026-03-01 01:17:19.198129 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-01 01:17:19.198133 | orchestrator | Sunday 01 March 2026 01:15:41 +0000 (0:00:04.933) 0:03:07.819 ********** 2026-03-01 01:17:19.198136 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.198140 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.198144 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-01 01:17:19.198148 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198151 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198155 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-01 01:17:19.198160 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-01 
01:17:19.198163 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.198167 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-01 01:17:19.198171 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198174 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198178 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-01 01:17:19.198182 | orchestrator | 2026-03-01 01:17:19.198186 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-01 01:17:19.198189 | orchestrator | Sunday 01 March 2026 01:15:46 +0000 (0:00:04.710) 0:03:12.529 ********** 2026-03-01 01:17:19.198196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.198210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.198214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-01 01:17:19.198218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.198222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.198226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-01 01:17:19.198232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198241 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198254 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-01 01:17:19.198284 | orchestrator | 2026-03-01 01:17:19.198288 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-01 01:17:19.198291 | orchestrator | Sunday 01 March 2026 01:15:50 +0000 (0:00:03.628) 0:03:16.157 ********** 2026-03-01 01:17:19.198295 | orchestrator | skipping: [testbed-node-0] 2026-03-01 01:17:19.198299 | orchestrator | skipping: [testbed-node-1] 2026-03-01 01:17:19.198303 | orchestrator | skipping: [testbed-node-2] 2026-03-01 01:17:19.198306 | orchestrator | 2026-03-01 01:17:19.198310 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-01 01:17:19.198314 | orchestrator | Sunday 01 March 2026 01:15:50 +0000 (0:00:00.363) 0:03:16.521 ********** 2026-03-01 01:17:19.198318 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198322 | orchestrator | 2026-03-01 01:17:19.198326 | 
orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-01 01:17:19.198330 | orchestrator | Sunday 01 March 2026 01:15:52 +0000 (0:00:02.270) 0:03:18.791 ********** 2026-03-01 01:17:19.198334 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198338 | orchestrator | 2026-03-01 01:17:19.198342 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-01 01:17:19.198345 | orchestrator | Sunday 01 March 2026 01:15:55 +0000 (0:00:02.505) 0:03:21.297 ********** 2026-03-01 01:17:19.198349 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198353 | orchestrator | 2026-03-01 01:17:19.198357 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-01 01:17:19.198360 | orchestrator | Sunday 01 March 2026 01:15:58 +0000 (0:00:02.665) 0:03:23.962 ********** 2026-03-01 01:17:19.198364 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198368 | orchestrator | 2026-03-01 01:17:19.198372 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-01 01:17:19.198375 | orchestrator | Sunday 01 March 2026 01:16:01 +0000 (0:00:03.008) 0:03:26.971 ********** 2026-03-01 01:17:19.198380 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198386 | orchestrator | 2026-03-01 01:17:19.198392 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-01 01:17:19.198398 | orchestrator | Sunday 01 March 2026 01:16:21 +0000 (0:00:20.633) 0:03:47.604 ********** 2026-03-01 01:17:19.198403 | orchestrator | 2026-03-01 01:17:19.198412 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-01 01:17:19.198422 | orchestrator | Sunday 01 March 2026 01:16:21 +0000 (0:00:00.076) 0:03:47.681 ********** 2026-03-01 01:17:19.198428 | orchestrator | 2026-03-01 
01:17:19.198434 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-01 01:17:19.198447 | orchestrator | Sunday 01 March 2026 01:16:21 +0000 (0:00:00.066) 0:03:47.747 ********** 2026-03-01 01:17:19.198453 | orchestrator | 2026-03-01 01:17:19.198459 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-01 01:17:19.198465 | orchestrator | Sunday 01 March 2026 01:16:21 +0000 (0:00:00.070) 0:03:47.817 ********** 2026-03-01 01:17:19.198471 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198477 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.198483 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.198491 | orchestrator | 2026-03-01 01:17:19.198497 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-01 01:17:19.198503 | orchestrator | Sunday 01 March 2026 01:16:38 +0000 (0:00:16.814) 0:04:04.631 ********** 2026-03-01 01:17:19.198508 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.198514 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.198520 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198527 | orchestrator | 2026-03-01 01:17:19.198533 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-01 01:17:19.198540 | orchestrator | Sunday 01 March 2026 01:16:46 +0000 (0:00:07.950) 0:04:12.582 ********** 2026-03-01 01:17:19.198546 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198552 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.198558 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.198564 | orchestrator | 2026-03-01 01:17:19.198570 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-01 01:17:19.198578 | orchestrator | Sunday 01 March 2026 01:16:57 +0000 (0:00:10.490) 0:04:23.073 
********** 2026-03-01 01:17:19.198582 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198586 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.198590 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.198593 | orchestrator | 2026-03-01 01:17:19.198598 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-01 01:17:19.198605 | orchestrator | Sunday 01 March 2026 01:17:07 +0000 (0:00:10.234) 0:04:33.307 ********** 2026-03-01 01:17:19.198610 | orchestrator | changed: [testbed-node-0] 2026-03-01 01:17:19.198614 | orchestrator | changed: [testbed-node-2] 2026-03-01 01:17:19.198617 | orchestrator | changed: [testbed-node-1] 2026-03-01 01:17:19.198621 | orchestrator | 2026-03-01 01:17:19.198625 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-01 01:17:19.198629 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-01 01:17:19.198634 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 01:17:19.198638 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-01 01:17:19.198642 | orchestrator | 2026-03-01 01:17:19.198646 | orchestrator | 2026-03-01 01:17:19.198650 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-01 01:17:19.198654 | orchestrator | Sunday 01 March 2026 01:17:17 +0000 (0:00:10.007) 0:04:43.315 ********** 2026-03-01 01:17:19.198661 | orchestrator | =============================================================================== 2026-03-01 01:17:19.198665 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.63s 2026-03-01 01:17:19.198669 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.81s 
2026-03-01 01:17:19.198673 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.36s
2026-03-01 01:17:19.198677 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.12s
2026-03-01 01:17:19.198681 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.83s
2026-03-01 01:17:19.198684 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.49s
2026-03-01 01:17:19.198692 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.24s
2026-03-01 01:17:19.198696 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.23s
2026-03-01 01:17:19.198700 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.01s
2026-03-01 01:17:19.198703 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.95s
2026-03-01 01:17:19.198707 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.36s
2026-03-01 01:17:19.198711 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.25s
2026-03-01 01:17:19.198715 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.08s
2026-03-01 01:17:19.198719 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.63s
2026-03-01 01:17:19.198723 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.21s
2026-03-01 01:17:19.198727 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.12s
2026-03-01 01:17:19.198730 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.04s
2026-03-01 01:17:19.198734 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.93s
2026-03-01 01:17:19.198738 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 4.79s
2026-03-01 01:17:19.198742 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.75s
2026-03-01 01:17:22.234512 | orchestrator | 2026-03-01 01:17:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:25.273777 | orchestrator | 2026-03-01 01:17:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:28.314486 | orchestrator | 2026-03-01 01:17:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:31.350526 | orchestrator | 2026-03-01 01:17:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:34.392665 | orchestrator | 2026-03-01 01:17:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:37.427820 | orchestrator | 2026-03-01 01:17:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:40.471528 | orchestrator | 2026-03-01 01:17:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:43.515568 | orchestrator | 2026-03-01 01:17:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:46.559595 | orchestrator | 2026-03-01 01:17:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:49.605337 | orchestrator | 2026-03-01 01:17:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:52.647592 | orchestrator | 2026-03-01 01:17:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:55.691413 | orchestrator | 2026-03-01 01:17:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:17:58.733899 | orchestrator | 2026-03-01 01:17:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:01.778406 | orchestrator | 2026-03-01 01:18:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:04.822460 | orchestrator | 2026-03-01 01:18:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:07.865251 | orchestrator | 2026-03-01 01:18:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:10.907929 | orchestrator | 2026-03-01 01:18:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:13.950202 | orchestrator | 2026-03-01 01:18:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:16.993532 | orchestrator | 2026-03-01 01:18:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-01 01:18:20.036032 | orchestrator |
2026-03-01 01:18:20.379262 | orchestrator |
2026-03-01 01:18:20.386204 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Mar 1 01:18:20 UTC 2026
2026-03-01 01:18:20.386300 | orchestrator |
2026-03-01 01:18:20.805854 | orchestrator | ok: Runtime: 0:34:04.253139
2026-03-01 01:18:21.126439 |
2026-03-01 01:18:21.126583 | TASK [Bootstrap services]
2026-03-01 01:18:21.862686 | orchestrator |
2026-03-01 01:18:21.862839 | orchestrator | # BOOTSTRAP
2026-03-01 01:18:21.862855 | orchestrator |
2026-03-01 01:18:21.862865 | orchestrator | + set -e
2026-03-01 01:18:21.862873 | orchestrator | + echo
2026-03-01 01:18:21.862882 | orchestrator | + echo '# BOOTSTRAP'
2026-03-01 01:18:21.862893 | orchestrator | + echo
2026-03-01 01:18:21.862923 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-01 01:18:21.871747 | orchestrator | + set -e
2026-03-01 01:18:21.871827 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-01 01:18:26.160333 | orchestrator | 2026-03-01 01:18:26 | INFO  | It takes a moment until task 479a03c1-daa0-4c8a-9c29-a672968f90b2 (flavor-manager) has been started and output is visible here.
2026-03-01 01:18:34.447324 | orchestrator | 2026-03-01 01:18:29 | INFO  | Flavor SCS-1L-1 created
2026-03-01 01:18:34.447413 | orchestrator | 2026-03-01 01:18:29 | INFO  | Flavor SCS-1L-1-5 created
2026-03-01 01:18:34.447422 | orchestrator | 2026-03-01 01:18:29 | INFO  | Flavor SCS-1V-2 created
2026-03-01 01:18:34.447427 | orchestrator | 2026-03-01 01:18:29 | INFO  | Flavor SCS-1V-2-5 created
2026-03-01 01:18:34.447431 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-1V-4 created
2026-03-01 01:18:34.447436 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-1V-4-10 created
2026-03-01 01:18:34.447441 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-1V-8 created
2026-03-01 01:18:34.447445 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-1V-8-20 created
2026-03-01 01:18:34.447459 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-2V-4 created
2026-03-01 01:18:34.447463 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-2V-4-10 created
2026-03-01 01:18:34.447467 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-2V-8 created
2026-03-01 01:18:34.447471 | orchestrator | 2026-03-01 01:18:30 | INFO  | Flavor SCS-2V-8-20 created
2026-03-01 01:18:34.447475 | orchestrator | 2026-03-01 01:18:31 | INFO  | Flavor SCS-2V-16 created
2026-03-01 01:18:34.447479 | orchestrator | 2026-03-01 01:18:31 | INFO  | Flavor SCS-2V-16-50 created
2026-03-01 01:18:34.447483 | orchestrator | 2026-03-01 01:18:31 | INFO  | Flavor SCS-4V-8 created
2026-03-01 01:18:34.447487 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-4V-8-20 created
2026-03-01 01:18:34.447491 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-4V-16 created
2026-03-01 01:18:34.447494 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-4V-16-50 created
2026-03-01 01:18:34.447498 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-4V-32 created
2026-03-01 01:18:34.447502 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-4V-32-100 created
2026-03-01 01:18:34.447506 | orchestrator | 2026-03-01 01:18:32 | INFO  | Flavor SCS-8V-16 created
2026-03-01 01:18:34.447510 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-8V-16-50 created
2026-03-01 01:18:34.447514 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-8V-32 created
2026-03-01 01:18:34.447518 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-8V-32-100 created
2026-03-01 01:18:34.447522 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-16V-32 created
2026-03-01 01:18:34.447526 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-16V-32-100 created
2026-03-01 01:18:34.447530 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-2V-4-20s created
2026-03-01 01:18:34.447533 | orchestrator | 2026-03-01 01:18:33 | INFO  | Flavor SCS-4V-8-50s created
2026-03-01 01:18:34.447537 | orchestrator | 2026-03-01 01:18:34 | INFO  | Flavor SCS-4V-16-100s created
2026-03-01 01:18:34.447541 | orchestrator | 2026-03-01 01:18:34 | INFO  | Flavor SCS-8V-32-100s created
2026-03-01 01:18:36.731408 | orchestrator | 2026-03-01 01:18:36 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-01 01:18:46.819116 | orchestrator | 2026-03-01 01:18:46 | INFO  | Prepare task for execution of bootstrap-basic.
2026-03-01 01:18:46.889025 | orchestrator | 2026-03-01 01:18:46 | INFO  | Task aabee416-9e6c-4223-ae90-277ed5cc18ea (bootstrap-basic) was prepared for execution.
2026-03-01 01:18:46.889123 | orchestrator | 2026-03-01 01:18:46 | INFO  | It takes a moment until task aabee416-9e6c-4223-ae90-277ed5cc18ea (bootstrap-basic) has been started and output is visible here.
2026-03-01 01:19:36.096041 | orchestrator |
2026-03-01 01:19:36.096162 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-01 01:19:36.096179 | orchestrator |
2026-03-01 01:19:36.096191 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-01 01:19:36.096203 | orchestrator | Sunday 01 March 2026 01:18:51 +0000 (0:00:00.074) 0:00:00.074 **********
2026-03-01 01:19:36.096214 | orchestrator | ok: [localhost]
2026-03-01 01:19:36.096226 | orchestrator |
2026-03-01 01:19:36.096238 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-01 01:19:36.096249 | orchestrator | Sunday 01 March 2026 01:18:53 +0000 (0:00:01.971) 0:00:02.045 **********
2026-03-01 01:19:36.096262 | orchestrator | ok: [localhost]
2026-03-01 01:19:36.096273 | orchestrator |
2026-03-01 01:19:36.096285 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-01 01:19:36.096296 | orchestrator | Sunday 01 March 2026 01:19:02 +0000 (0:00:09.267) 0:00:11.313 **********
2026-03-01 01:19:36.096307 | orchestrator | changed: [localhost]
2026-03-01 01:19:36.096318 | orchestrator |
2026-03-01 01:19:36.096330 | orchestrator | TASK [Create public network] ***************************************************
2026-03-01 01:19:36.096341 | orchestrator | Sunday 01 March 2026 01:19:10 +0000 (0:00:08.320) 0:00:19.634 **********
2026-03-01 01:19:36.096352 | orchestrator | changed: [localhost]
2026-03-01 01:19:36.096363 | orchestrator |
2026-03-01 01:19:36.096379 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-01 01:19:36.096391 | orchestrator | Sunday 01 March 2026 01:19:16 +0000 (0:00:05.672) 0:00:25.306 **********
2026-03-01 01:19:36.096402 | orchestrator | changed: [localhost]
2026-03-01 01:19:36.096413 | orchestrator |
2026-03-01 01:19:36.096480 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-01 01:19:36.096494 | orchestrator | Sunday 01 March 2026 01:19:23 +0000 (0:00:06.736) 0:00:32.043 **********
2026-03-01 01:19:36.096508 | orchestrator | changed: [localhost]
2026-03-01 01:19:36.096521 | orchestrator |
2026-03-01 01:19:36.096535 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-01 01:19:36.096548 | orchestrator | Sunday 01 March 2026 01:19:27 +0000 (0:00:04.573) 0:00:36.616 **********
2026-03-01 01:19:36.096561 | orchestrator | changed: [localhost]
2026-03-01 01:19:36.096574 | orchestrator |
2026-03-01 01:19:36.096588 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-01 01:19:36.096613 | orchestrator | Sunday 01 March 2026 01:19:32 +0000 (0:00:04.227) 0:00:40.843 **********
2026-03-01 01:19:36.096627 | orchestrator | ok: [localhost]
2026-03-01 01:19:36.096640 | orchestrator |
2026-03-01 01:19:36.096652 | orchestrator | PLAY RECAP *********************************************************************
2026-03-01 01:19:36.096666 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-01 01:19:36.096681 | orchestrator |
2026-03-01 01:19:36.096693 | orchestrator |
2026-03-01 01:19:36.096706 | orchestrator | TASKS RECAP ********************************************************************
2026-03-01 01:19:36.096720 | orchestrator | Sunday 01 March 2026 01:19:35 +0000 (0:00:03.799) 0:00:44.643 **********
2026-03-01 01:19:36.096733 | orchestrator | ===============================================================================
2026-03-01 01:19:36.096746 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.27s
2026-03-01 01:19:36.096785 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.32s
2026-03-01 01:19:36.096797 | orchestrator | Set public network to default ------------------------------------------- 6.74s
2026-03-01 01:19:36.096808 | orchestrator | Create public network --------------------------------------------------- 5.67s
2026-03-01 01:19:36.096819 | orchestrator | Create public subnet ---------------------------------------------------- 4.57s
2026-03-01 01:19:36.096831 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.23s
2026-03-01 01:19:36.096842 | orchestrator | Create manager role ----------------------------------------------------- 3.80s
2026-03-01 01:19:36.096853 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s
2026-03-01 01:19:38.636075 | orchestrator | 2026-03-01 01:19:38 | INFO  | It takes a moment until task 3713b7f6-8fd1-4b40-a225-90dd34fb01d1 (image-manager) has been started and output is visible here.
2026-03-01 01:20:21.103234 | orchestrator | 2026-03-01 01:19:41 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-01 01:20:21.103331 | orchestrator | 2026-03-01 01:19:41 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-01 01:20:21.103343 | orchestrator | 2026-03-01 01:19:41 | INFO  | Importing image Cirros 0.6.2
2026-03-01 01:20:21.103351 | orchestrator | 2026-03-01 01:19:41 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-01 01:20:21.103359 | orchestrator | 2026-03-01 01:19:44 | INFO  | Waiting for import to complete...
2026-03-01 01:20:21.103366 | orchestrator | 2026-03-01 01:19:54 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-01 01:20:21.103373 | orchestrator | 2026-03-01 01:19:54 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-01 01:20:21.103381 | orchestrator | 2026-03-01 01:19:54 | INFO  | Setting internal_version = 0.6.2
2026-03-01 01:20:21.103388 | orchestrator | 2026-03-01 01:19:54 | INFO  | Setting image_original_user = cirros
2026-03-01 01:20:21.103395 | orchestrator | 2026-03-01 01:19:54 | INFO  | Adding tag os:cirros
2026-03-01 01:20:21.103402 | orchestrator | 2026-03-01 01:19:54 | INFO  | Setting property architecture: x86_64
2026-03-01 01:20:21.103410 | orchestrator | 2026-03-01 01:19:55 | INFO  | Setting property hw_disk_bus: scsi
2026-03-01 01:20:21.103417 | orchestrator | 2026-03-01 01:19:55 | INFO  | Setting property hw_rng_model: virtio
2026-03-01 01:20:21.103424 | orchestrator | 2026-03-01 01:19:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-01 01:20:21.103431 | orchestrator | 2026-03-01 01:19:56 | INFO  | Setting property hw_watchdog_action: reset
2026-03-01 01:20:21.103438 | orchestrator | 2026-03-01 01:19:56 | INFO  | Setting property hypervisor_type: qemu
2026-03-01 01:20:21.103445 | orchestrator | 2026-03-01 01:19:56 | INFO  | Setting property os_distro: cirros
2026-03-01 01:20:21.103458 | orchestrator | 2026-03-01 01:19:56 | INFO  | Setting property os_purpose: minimal
2026-03-01 01:20:21.103465 | orchestrator | 2026-03-01 01:19:57 | INFO  | Setting property replace_frequency: never
2026-03-01 01:20:21.103472 | orchestrator | 2026-03-01 01:19:57 | INFO  | Setting property uuid_validity: none
2026-03-01 01:20:21.103479 | orchestrator | 2026-03-01 01:19:57 | INFO  | Setting property provided_until: none
2026-03-01 01:20:21.103485 | orchestrator | 2026-03-01 01:19:57 | INFO  | Setting property image_description: Cirros
2026-03-01 01:20:21.103492 | orchestrator | 2026-03-01 01:19:58 | INFO  | Setting property image_name: Cirros
2026-03-01 01:20:21.103499 | orchestrator | 2026-03-01 01:19:58 | INFO  | Setting property internal_version: 0.6.2
2026-03-01 01:20:21.103523 | orchestrator | 2026-03-01 01:19:58 | INFO  | Setting property image_original_user: cirros
2026-03-01 01:20:21.103530 | orchestrator | 2026-03-01 01:19:58 | INFO  | Setting property os_version: 0.6.2
2026-03-01 01:20:21.103537 | orchestrator | 2026-03-01 01:19:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-01 01:20:21.103545 | orchestrator | 2026-03-01 01:20:00 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-01 01:20:21.103552 | orchestrator | 2026-03-01 01:20:00 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-01 01:20:21.103559 | orchestrator | 2026-03-01 01:20:00 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-01 01:20:21.103565 | orchestrator | 2026-03-01 01:20:00 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-01 01:20:21.103576 | orchestrator | 2026-03-01 01:20:00 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-01 01:20:21.103583 | orchestrator | 2026-03-01 01:20:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-01 01:20:21.103590 | orchestrator | 2026-03-01 01:20:00 | INFO  | Importing image Cirros 0.6.3
2026-03-01 01:20:21.103597 | orchestrator | 2026-03-01 01:20:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-01 01:20:21.103604 | orchestrator | 2026-03-01 01:20:02 | INFO  | Waiting for image to leave queued state...
2026-03-01 01:20:21.103611 | orchestrator | 2026-03-01 01:20:04 | INFO  | Waiting for import to complete...
2026-03-01 01:20:21.103617 | orchestrator | 2026-03-01 01:20:14 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-01 01:20:21.103637 | orchestrator | 2026-03-01 01:20:15 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-01 01:20:21.103645 | orchestrator | 2026-03-01 01:20:15 | INFO  | Setting internal_version = 0.6.3
2026-03-01 01:20:21.103652 | orchestrator | 2026-03-01 01:20:15 | INFO  | Setting image_original_user = cirros
2026-03-01 01:20:21.103658 | orchestrator | 2026-03-01 01:20:15 | INFO  | Adding tag os:cirros
2026-03-01 01:20:21.103665 | orchestrator | 2026-03-01 01:20:15 | INFO  | Setting property architecture: x86_64
2026-03-01 01:20:21.103672 | orchestrator | 2026-03-01 01:20:15 | INFO  | Setting property hw_disk_bus: scsi
2026-03-01 01:20:21.103679 | orchestrator | 2026-03-01 01:20:15 | INFO  | Setting property hw_rng_model: virtio
2026-03-01 01:20:21.103685 | orchestrator | 2026-03-01 01:20:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-01 01:20:21.103692 | orchestrator | 2026-03-01 01:20:16 | INFO  | Setting property hw_watchdog_action: reset
2026-03-01 01:20:21.103699 | orchestrator | 2026-03-01 01:20:16 | INFO  | Setting property hypervisor_type: qemu
2026-03-01 01:20:21.103706 | orchestrator | 2026-03-01 01:20:16 | INFO  | Setting property os_distro: cirros
2026-03-01 01:20:21.103713 | orchestrator | 2026-03-01 01:20:17 | INFO  | Setting property os_purpose: minimal
2026-03-01 01:20:21.103719 | orchestrator | 2026-03-01 01:20:17 | INFO  | Setting property replace_frequency: never
2026-03-01 01:20:21.103726 | orchestrator | 2026-03-01 01:20:17 | INFO  | Setting property uuid_validity: none
2026-03-01 01:20:21.103733 | orchestrator | 2026-03-01 01:20:17 | INFO  | Setting property provided_until: none
2026-03-01 01:20:21.103740 | orchestrator | 2026-03-01 01:20:18 | INFO  | Setting property image_description: Cirros
2026-03-01 01:20:21.103747 | orchestrator | 2026-03-01 01:20:18 | INFO  | Setting property image_name: Cirros
2026-03-01 01:20:21.103759 | orchestrator | 2026-03-01 01:20:18 | INFO  | Setting property internal_version: 0.6.3
2026-03-01 01:20:21.103766 | orchestrator | 2026-03-01 01:20:19 | INFO  | Setting property image_original_user: cirros
2026-03-01 01:20:21.103773 | orchestrator | 2026-03-01 01:20:19 | INFO  | Setting property os_version: 0.6.3
2026-03-01 01:20:21.103780 | orchestrator | 2026-03-01 01:20:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-01 01:20:21.103787 | orchestrator | 2026-03-01 01:20:19 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-01 01:20:21.103793 | orchestrator | 2026-03-01 01:20:20 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-01 01:20:21.103800 | orchestrator | 2026-03-01 01:20:20 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-01 01:20:21.103807 | orchestrator | 2026-03-01 01:20:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-01 01:20:21.439623 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-01 01:20:23.899803 | orchestrator | 2026-03-01 01:20:23 | INFO  | date: 2026-02-28
2026-03-01 01:20:23.899976 | orchestrator | 2026-03-01 01:20:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20260228.qcow2
2026-03-01 01:20:23.900018 | orchestrator | 2026-03-01 01:20:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260228.qcow2
2026-03-01 01:20:23.900034 | orchestrator | 2026-03-01 01:20:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260228.qcow2.CHECKSUM
2026-03-01 01:20:24.064779 | orchestrator | 2026-03-01 01:20:24 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/logs"
2026-03-01 01:21:01.599806 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/artifacts"
2026-03-01 01:21:01.881935 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0a59ffbc013c4e8c87d4525fe06cbd45/work/docs"
2026-03-01 01:21:01.905129 |
2026-03-01 01:21:01.905280 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-01 01:21:02.793555 | orchestrator | changed: .d..t...... ./
2026-03-01 01:21:02.793830 | orchestrator | changed: All items complete
2026-03-01 01:21:02.793875 |
2026-03-01 01:21:03.528538 | orchestrator | changed: .d..t...... ./
2026-03-01 01:21:04.242982 | orchestrator | changed: .d..t...... ./
2026-03-01 01:21:04.266883 |
2026-03-01 01:21:04.267024 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-01 01:21:04.306056 | orchestrator | skipping: Conditional result was False
2026-03-01 01:21:04.309206 | orchestrator | skipping: Conditional result was False
2026-03-01 01:21:04.329079 |
2026-03-01 01:21:04.329312 | PLAY RECAP
2026-03-01 01:21:04.329401 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-01 01:21:04.329443 |
2026-03-01 01:21:04.457958 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-01 01:21:04.460768 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-01 01:21:05.260932 |
2026-03-01 01:21:05.261106 | PLAY [Base post]
2026-03-01 01:21:05.276340 |
2026-03-01 01:21:05.276490 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-01 01:21:06.280042 | orchestrator | changed
2026-03-01 01:21:06.289949 |
2026-03-01 01:21:06.290076 | PLAY RECAP
2026-03-01 01:21:06.290149 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-01 01:21:06.290226 |
2026-03-01 01:21:06.409371 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-01 01:21:06.411854 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-01 01:21:07.194665 |
2026-03-01 01:21:07.194858 | PLAY [Base post-logs]
2026-03-01 01:21:07.205256 |
2026-03-01 01:21:07.205422 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-01 01:21:07.668465 | localhost | changed
2026-03-01 01:21:07.678452 |
2026-03-01 01:21:07.678599 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-01 01:21:07.715295 | localhost | ok
2026-03-01 01:21:07.720535 |
2026-03-01 01:21:07.720694 | TASK [Set zuul-log-path fact]
2026-03-01 01:21:07.738512 | localhost | ok
2026-03-01 01:21:07.752487 |
2026-03-01 01:21:07.752614 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-01 01:21:07.790591 | localhost | ok
2026-03-01 01:21:07.797290 |
2026-03-01 01:21:07.797467 | TASK [upload-logs : Create log directories]
2026-03-01 01:21:08.309214 | localhost | changed
2026-03-01 01:21:08.313343 |
2026-03-01 01:21:08.313490 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-01 01:21:08.838757 | localhost -> localhost | ok: Runtime: 0:00:00.006997
2026-03-01 01:21:08.847473 |
2026-03-01 01:21:08.847643 | TASK [upload-logs : Upload logs to log server]
2026-03-01 01:21:09.435819 | localhost | Output suppressed because no_log was given
2026-03-01 01:21:09.439705 |
2026-03-01 01:21:09.439884 | LOOP [upload-logs : Compress console log and json output]
2026-03-01 01:21:09.500540 | localhost | skipping: Conditional result was False
2026-03-01 01:21:09.505417 | localhost | skipping: Conditional result was False
2026-03-01 01:21:09.510307 |
2026-03-01 01:21:09.510415 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-01 01:21:09.567047 | localhost | skipping: Conditional result was False
2026-03-01 01:21:09.567770 |
2026-03-01 01:21:09.571360 | localhost | skipping: Conditional result was False
2026-03-01 01:21:09.576744 |
2026-03-01 01:21:09.576859 | LOOP [upload-logs : Upload console log and json output]